defragmenting?

Posted: 20 Oct 2005 10:24 AM PDT


com wrote: 

<question mode="rhetorical">

Why would you want to defragment a filesystem?
What benefits would you get from defragmenting a filesystem?
What drawbacks are there to not defragmenting a filesystem?

</question>
 


--
Lew Pitcher
IT Specialist, Enterprise Data Systems,
Enterprise Technology Solutions, TD Bank Financial Group

(Opinions expressed are my own, not my employers')

Switching off the box after shutdown

Posted: 20 Oct 2005 04:01 AM PDT


Daniel Böhmer wrote: 

Outside of enabling the kernel support, you'll also need to:
- ensure that the apm module is loaded (if you use loadable module support), and
- ensure that the apmd daemon is running
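A minimal check, as a sketch (module names, paths and init scripts vary by distribution):

    # load the apm module if it is not compiled into the kernel
    modprobe apm
    lsmod | grep apm

    # check whether apmd is running, and start it if not
    ps ax | grep apmd
    /etc/init.d/apmd start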

--
Lew Pitcher
IT Specialist, Enterprise Data Systems,
Enterprise Technology Solutions, TD Bank Financial Group

(Opinions expressed are my own, not my employers')

Kodak DVC325 webcam driver?

Posted: 19 Oct 2005 11:36 AM PDT

Lew Pitcher wrote: 

Thanks Lew; I don't know why my searches never showed that. I have downloaded it
and will try it out.

What can I check to fix system performance?

Posted: 19 Oct 2005 10:47 AM PDT

On Wed, 19 Oct 2005 19:47:42 +0200, tuxworks <com> wrote: 

Were these memory data taken while the system was running that make job?

If so, it seems you have plenty of memory available: 3.5 Gbytes.
Also notice the amount of swap usage: In: 0, out: 0.

(When the kernel does not need the memory for anything else, it keeps
copies of files you have read or written lately in a cache, in case
you access them again. That makes it look like there is never
much memory free, but you should really consider "cache" as equivalent
to "free", except for a reasonable amount matching the rate of cache
hits per 10 seconds or so.)
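A quick way to see this, as a sketch (the column layout varies a bit between versions):

    # "cached" is memory the kernel will give back on demand
    free -m

    # sample memory and swap activity every 10 seconds;
    # "si" and "so" are swap-in and swap-out
    vmstat 10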

The next time you post data, please do not use Google to post.
As you can see above, Google breaks the lines, making it very hard to
read this kind of tabular data. Since you obviously have access to
Linux computers, get a newsreader. Myself, I am using Opera: not bad,
but not perfect, with some frustrating bugs and limitations.
I see people are talking about Pan, and I am considering giving it
a try. I used to use Gnus (a standard part of Emacs); it is quite good,
but the docs are horrible: they try to be good, but unless you know
the lingo, there is no hope. And no, you don't know the lingo.

As newbies go, you are unusually well behaved, posting a selection
of data. Good! Unfortunately (and you seem to be aware of it; you
hint at it) this problem is quite difficult and needs much more
data and a bit of thinking and asking and replying.

Myself, I am among the less expert in tuning issues, so I hope others
will join in with suggestions, but here are my first reactions.

Is it possible that the increase in time is due to changes in the
software package or its makefiles?

Builds can usually be made a lot faster by adding some "-j" options
to make commands. This is true on uniprocessor systems too. I saw you
said your system has 4 CPUs, so consider having at least 16 processes
running, four per CPU. Quite likely, the optimum is closer to 10
processes per CPU, or 40 in all. (This is based on my own experience,
but that was with a very different kind of system about 10 years ago.
If other readers on this list know differently, and specifically for
Linux systems or modern systems, please speak up.)

However, this has to be done judiciously. It is far from trivial to do.
But I brought a build down from 30 hours to 13 that way. (Back then
gmake did not have "-j", so I hacked a version of bash that allowed me
to run 300 processes with "&"; the shell would only issue as many
processes as I allowed in a special variable, and automatically issue
another from the queue when any process terminated. I found 25 gave good
results. I did not really experiment with it; I was just so happy to
come down to 13 hours. But I believe the OS in question was more geared
for batch processing than is customary today, which is why I would
consider trying first with lower numbers per CPU.)
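If you want to find the sweet spot empirically, something like this sketch works, assuming the tree has a usable "make clean" target:

    # time a few process counts and compare
    for j in 4 8 16 40; do
        make clean
        echo "== make -j$j =="
        time make -j$j
    done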

Consider the disk structure. See if you can arrange that the build
reads one disk while it writes another. This measure and the -j option
draw from the same pool of waiting time: each can, with luck, give
great returns, but when both are applied, the second one does not
win nearly as much.

First of all, learn more about what is going on when the system is
loaded. Run "top". By default the list is sorted with the processes
that use the most CPU at the top. If the system becomes memory or disk
constrained, it is better to sort by other columns. Hit the ">" key;
then the list gets sorted by the next column, %mem (rather than %cpu).
Play with ">" and "<". Do "man top" to learn about displaying other
columns with other kinds of data.

Take some snapshots by copying and pasting into an editor. (Since you
are new to Linux, do you know the Unix way of copying and pasting?
There is no "Ctrl-C Ctrl-V"; just mark what you want to copy,
and *middle*-click where you want to paste. Do you have a mouse
with three buttons, or have you configured the trick of using both
buttons at once to emulate the middle button?) If you need more time
between screen updates (the default is 1 second), find the commands
to change that.
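Alternatively, top has a batch mode that writes the snapshots for you; a sketch:

    # take 5 snapshots, 10 seconds apart, into a file
    top -b -n 5 -d 10 > top-snapshots.txt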

In the head part of the "top" screen, look at the "%wa" item in the
Cpu(s) line. It measures the time the CPU is waiting; mostly this is
disk wait. Notice that this is different from idle.

Tell us about the disks your system has. Is the system using any
network mounted storage? If so, we will need data about the network
performance too.

Be specific. If you are using software RAID or logical volumes, use
the tools to list the complete configuration and post it. Also post
the output of "fdisk -l /dev/hda", and similarly for the other disks.
If you have most of your data on SCSI disks or hardware RAID
units, use the appropriate tools to list the partitioning (if
applicable) and setup. Post the file /etc/fstab, and tell us which
mount points contain the source file tree. If the build tree is separate
from the source tree, tell us which mount point contains it.
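Something like the following gathers most of that (adjust device names to your system; the md and LVM commands only apply if you actually use them):

    fdisk -l /dev/hda     # repeat for each disk, e.g. /dev/sda
    cat /proc/mdstat      # software RAID status, if any
    vgdisplay -v          # logical volume layout, if any
    cat /etc/fstab
    df -h                 # where the source and build trees live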

If the build writes files in the source tree, consider if you can
modify the build process to have a separate build tree where all
but the pristine source files are written. Then consider using a tmpfs
to hold the build tree. This means the files are mostly not written
to disk at all. The "make install" step should write the final results
to a disk-backed file system, of course.
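As a sketch, assuming 2 GB is enough to hold the build tree and /mnt/build is where the build writes (both are assumptions, adjust to taste):

    # mount a RAM-backed filesystem over the build directory
    mount -t tmpfs -o size=2g tmpfs /mnt/build

Everything written under /mnt/build then lives in memory (backed by swap if needed) and vanishes at reboot, so copy out anything you want to keep.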

-Enrique

RH ES4 update 2 installation on a HP tc4100 with NetRaid1 card

Posted: 18 Oct 2005 12:13 PM PDT


"Jean-David Beyer" <com> wrote in message
news:supernews.com... 

Yes, it's an x86. I'm trying the 30-day trial from their website, which
includes just the software without support, so I thought this was the right
place to ask about installation issues.


MySQL

Posted: 18 Oct 2005 11:10 AM PDT

On Tue, 18 Oct 2005 20:10:51 +0200, test
<nl> wrote: 
You don't run .udeb files, you install them. .udeb files are used in the
initial installation process; after that, you install .deb files using
dpkg, apt-get, aptitude or synaptic.
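For example, with a downloaded package (the file name here is made up):

    # install a local .deb (hypothetical file name)
    dpkg -i mysql-server_4.1_i386.deb

    # or let apt fetch it and its dependencies
    apt-get install mysql-server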



--
"I have just one word for you, my boy...plastics."
- from "The Graduate"

writeable ntfs

Posted: 18 Oct 2005 07:39 AM PDT

On Tue, 18 Oct 2005 16:39:53 +0200, Abanowicz Tomasz <pl> wrote:
 

Linux NTFS support has had a sad history of problems, and it was almost
given up until a recent rewrite was included in mainline. I am
not sure when that happened; it may have been as late as 2.6.13.

Due to all the problems, the previous version was crippled to write
only inside existing files, that is, only when it was not necessary
to alter the metadata structure. That means no new files, no renames,
no moves, no appends, not even deletes. It was essentially a read
function turned into a write.

Due to license fears and other issues, distributions may or may not
include the new ntfs code in their kernels.
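Whichever driver you have, it is safe to use read-only; a sketch, assuming the Windows partition is /dev/hda1:

    # check for ntfs support, loading the module if needed
    grep -i ntfs /proc/filesystems || modprobe ntfs
    mkdir -p /mnt/windows
    mount -t ntfs -o ro /dev/hda1 /mnt/windows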
 

I don't know what error number the old ntfs would use to say "no"
to unsupported operations.

---

By the way, I saw you got an answer from Peter Breuer (Hello Peter!)
but for some reason it was not threaded with your post. It seems to
contain approximately the same information as I give you here, but in
a different style. If you haven't read it yet, his posts are best
read wearing red glasses and a crocodile skin. If you already read it,
be honored, it is much more friendly than many others. He appears
(in my view) to be working hard to become more friendly and informative,
although we may differ in our interpretation of pedagogical issues.

Good luck,
Enrique

Corrupted persistent superblock - repairable?

Posted: 18 Oct 2005 07:37 AM PDT

Enrique --

Thank you for your reply. It worked perfectly! Thanks for your help. :-))

Best,

Rohan Beckles
net

--
ASUS PC-DL Deluxe
Intel Xeon @ 3.06GHz (dual)
Corsair TwinX2048-3200C2-PT
Western Digital WD2500SD (dual, RAID1)
ABIT Siluro Ti4200-8x DOTH 128MB OTES

Linux tool can do email with attachment from command line

Posted: 17 Oct 2005 08:38 PM PDT

Matt Payton <com> writes:
 
 

So, install it.
 

Pine?
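
If mutt is installed, something like this usually works (the attachment option syntax differs between mutt versions); the uuencode pipeline is the classic fallback. File and address names here are made up:

    # mutt: send report.pdf as a proper MIME attachment
    echo "See attachment" | mutt -s "Report" -a report.pdf user@example.com

    # classic fallback: uuencode piped into mail
    uuencode report.pdf report.pdf | mail -s "Report" user@example.com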

 
 
 
 

Fedora 3/Windows Problem

Posted: 17 Oct 2005 06:10 PM PDT

I used the grub disk and reconfigured the grub configuration file.
Thanks for your Help, I really appreciate it.

Driver update available for Suse 9.1 (kernel 2.6) to resolve problem loading windows with GRUB

Posted: 17 Oct 2005 04:24 PM PDT

I was only trying to be useful here...

CD7DVD burner k3b damaged

Posted: 17 Oct 2005 07:18 AM PDT

Enrique Perez-Terron schrieb:
 

Hi Enrique -
 

thank you for the answer.

The subject line had been damaged also. :-(
It should read

CD and DVD burner software k3b damaged

which means the Linux software k3b.
k3b is a CD and DVD burner for Linux working with KDE
or GNOME. It is delivered with the SuSE distribution
of Linux (the German company has now been bought by Novell).

No, it is not a port from Windows; it was developed for Linux.
I am not very skilled with the Linux system, but I
want to learn and use it. That's why I could not find
any solution for that software problem.
I will check your hints this evening and tell you what I
find.

Regards Udo

Linux, Windows Dual boot problem with GRUB

Posted: 16 Oct 2005 07:17 PM PDT

On Tue, 18 Oct 2005 01:21:35 +0200, Kunal <com> wrote:
 

Thanks for your report, I was not aware of it.

-Enrique

Can't use parted to resize recent ext2 partitions, what has changed?

Posted: 16 Oct 2005 10:19 AM PDT

On Mon, 17 Oct 2005 23:56:36 +0200, Chris F Clark <TheWorld.com> wrote:
 

Oh yes, don't remind me about that. :)
 

Consider not using parted, but resize2fs, combined with fdisk, and
possibly dd.
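A sketch of the shrink case, assuming for illustration that the filesystem is /dev/hda8 and you want it down to 10 GB (you still have to shrink the partition itself with fdisk afterwards):

    umount /dev/hda8
    e2fsck -f /dev/hda8      # resize2fs insists on a clean check first
    resize2fs /dev/hda8 10G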

Or consider relabeling the partition again after the move. It takes
(quite) some time, but should not be problematic.

With fdisk you can control the exact sector number each partition
starts on, and the size of each partition in sectors.

Two caveats: Apart from the flexibility, the documentation and the
interface are *bad*. Take valium first. Explore the 'm' and 'x'
commands. Sometimes fdisk tells me the kernel failed to adjust
its idea of the partition table afterwards. I have no idea what
is happening then, but rebooting always forces the kernel to
learn the new partition sizes. Until then do not mount anything,
and do not write to the new partitions through, e.g., /dev/hda8;
use only /dev/hda, which does not depend on the partition table.

Second: Changing this data does not change the data on the disk,
so you must be very methodical and write down the contents of the
partition table before you do any changes, so you can set things
back if something goes wrong. If you move the start of a partition,
you must separately move the contents of the disk sectors.
And remember, if you want to shift the sectors UP, e.g. from 100000 -
129999 to 110000 - 139999, then you must copy the last sector before
you overwrite it! (I have never done this, and don't even know
of a tool that will move sectors in reverse order. But in the
above example it would be enough to copy 120000 - 129999 to 130000
- 139999, then 110000 - 119999 to 120000 - 129999, and then finally
100000 - 109999 to 110000 - 119999: three dd commands.)
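In dd terms, with 512-byte sectors, those three copies would look something like this (an illustration only; one wrong number here destroys data):

    # move sectors 100000-129999 up by 10000, top chunk first
    dd if=/dev/hda of=/dev/hda bs=512 skip=120000 seek=130000 count=10000
    dd if=/dev/hda of=/dev/hda bs=512 skip=110000 seek=120000 count=10000
    dd if=/dev/hda of=/dev/hda bs=512 skip=100000 seek=110000 count=10000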

A third caveat: for all I know, parted perhaps does exactly what
I am suggesting, and fails because resize2fs does not yet handle
the newest attributes. I don't know if this is the case; I am just
fantasizing.

-Enrique