[131065790010] |How to scale xvfb? [131065790020] |I'm looking to develop a web-application that translates request parameters into a PNG/GIF chart. [131065790030] |I'm looking to reuse an open source charting library, but I have concerns about its scalability. [131065790040] |A long time ago, we were using a single Xvfb buffer to render our image into, but that basically meant servicing only one incoming request at a time. [131065790050] |According to the man page it seems we can create numerous buffers? [131065790060] |If so, is it simply a matter of ensuring there is enough physical RAM to prevent excessive paging? [131065800010] |You can use xvfb-run your-command. [131065800020] |It takes care of setup and cleanup. [131065800030] |From the man page: [131065800040] |xvfb-run is a wrapper for the Xvfb(1x) command which simplifies the task of running commands (typically an X client, or a script containing a list of clients to be run) within a virtual X server environment. xvfb-run sets up an X authority file, writes a cookie to it (see xauth(1x)) and then starts the Xvfb X server as a background process. [131065800050] |The process ID of Xvfb is stored for later use. [131065800060] |The specified command is then run using the X display corresponding to the Xvfb server just started and the X authority file created earlier. [131065800070] |When the command exits, its status is saved, the Xvfb server is killed (using the process ID stored earlier), the X authority cookie removed, and the authority file deleted. xvfb-run then exits with the exit status of command. [131065810010] |Simple CLI RSS reader that can print only subject and URI, based on what's new since a time interval [131065810020] |So what I'm looking for is easy enough to code, but I'm wondering if it already exists, so I don't go releasing duplicate code. [131065810030] |I need a feed reader that prints the subject and URI on one line, to stdout.
[131065810040] |It should be able to be configured (not necessarily by a config file) to show only items that are new within, say, the last 5 minutes. [131065810050] |The reason I want this is because I'm coding my ping.fm replacement. [131065810060] |I basically want to be able to do something like feedreader | pingit where pingit will, for each line of input, make a separate post. [131065810070] |In this way I'll also be able to echo -n "my social post" | pingit. I'll probably also make pingit "post" work. [131065820010] |Look into rsstail, especially the -oil options [131065830010] |In a side conversation mst recommended perlanet for this purpose. [131065840010] |What's the difference between 32 and 64 bit linux? [131065840020] |What exactly are the implications? [131065840030] |Can a 32 bit linux run applications that are compiled as 64 bit? [131065840040] |Or vice versa? [131065840050] |I just got a new machine with an i5 processor and installed a copy of Ubuntu 10.10, which seems to be i686, which I now realize is 32-bit, but I think I have some 64-bit apps installed. [131065840060] |Can this be right? [131065850010] |You need a 64-bit operating system to run applications compiled for 64-bit systems. [131065850020] |You will, however, be able to run 32-bit applications as well. [131065850030] |It just doesn't work the other way around. [131065860010] |bulk rename (or correctly display) files with special characters [131065860020] |I have a bunch of directories and subdirectories that contain files with special characters, like this file: [131065860030] |Find reveals an escape sequence: [131065860040] |The only reason I can even type their names on the console is because of tab completion. [131065860050] |This also means I can rename them manually (and strip the special character). [131065860060] |I've set LC_ALL to UTF-8, which does not seem to help (also not on a new shell): [131065860070] |I'm connecting to the machine using ssh from my mac.
[131065860080] |It's an Ubuntu install: [131065860090] |Shell is Bash, TERM is set to xterm-color. [131065860100] |These files have been there for quite a while, and they have not been created using that install of Ubuntu. [131065860110] |So I don't know what the system encoding settings used to be. [131065860120] |I've tried things along the lines of: [131065860130] |But I can't find a solution that does everything I want: [131065860140] |
  • Identify all files that have undisplayable characters (the above ignores way too much)
[131065860150] |
  • For all those files in a directory tree (recursively), execute mv oldname newname
[131065860160] |
  • Optionally, the ability to transliterate special characters such as ä to a (not required, but would be awesome)
[131065860170] |OR [131065860180] |
  • Correctly display all these files (and no errors in applications when trying to open them)
[131065860190] |I have bits and pieces, like iterating over all files and moving them, but identifying the files and formatting them correctly for the mv command seems to be the hard part. [131065860200] |Any extra information as to why they do not display correctly, or how to "guess" the correct encoding, is also welcome. [131065860210] |(I've tried convmv but it doesn't seem to do exactly what I want: http://j3e.de/linux/convmv/) [131065870010] |I guess you see this invalid character because the name contains a byte sequence that isn't valid UTF-8. [131065870020] |File names on typical unix filesystems (including yours) are byte strings, and it's up to applications to decide on what encoding to use. [131065870030] |Nowadays, there is a trend to use UTF-8, but it's not universal, especially in locales that could never live with plain ASCII and have been using other encodings since before UTF-8 even existed. [131065870040] |Try LC_CTYPE=en_US.iso88591 ls to see if the file name makes sense in ISO-8859-1 (latin-1). [131065870050] |If it doesn't, try other locales. [131065870060] |Note that only the LC_CTYPE locale setting matters here. [131065870070] |In a UTF-8 locale, the following command will show you all files whose name is not valid UTF-8: [131065870080] |You can check if they make more sense in another locale with recode or iconv: [131065870090] |Once you've determined that a bunch of file names are in a certain encoding (e.g. latin1), one way to rename them is [131065870100] |This uses the perl rename command available on Debian and Ubuntu. [131065870110] |You can pass it -n to show what it would be doing without actually renaming the files. [131065880010] |Where are the mount points defined in CentOS? [131065880020] |There is a mount point at /mnt/mountname and it goes to a large SAN with lots of storage but it's mounted as a read-only filesystem. [131065880030] |I want to add a mount point with a different name that's read-write.
[131065880040] |I've looked in /etc/fstab and /etc/vfstab (which doesn't exist) and there is nothing there that mentions /mnt/mountname. [131065880050] |Where else can I look? [131065890010] |Mount points are defined in /etc/fstab; however, the file system mounted at /mnt/mountname may have been mounted "by hand", or something may have mounted it after the system started. [131065890020] |I.e., someone may have just typed the following in the terminal to mount it: [131065890030] |If you want to add /mnt/mountname to /etc/fstab and/or want to use it as a template for the new one you're adding, you can run the following to show you what the line in /etc/fstab should look like: [131065890040] |If that doesn't work, try 'cat /proc/mounts'. [131065900010] |find searching in parent directories instead of subdirectories [131065900020] |I'm nested deep in a file tree, and I'd like to find which parent directory contains a file. [131065900030] |E.g. [131065900040] |I'm in a set of nested Git repositories and want to find the .git directory controlling the files I'm currently at. [131065900050] |I'd hope for something like find -searchup -iname ".git"
[131065930060] |More info: The application that I am using maintains its own cache, and in doing so attains an average speedup of 5x or more. [131065930070] |When writing to a file, the contents of the cache must be written out to the filesystem cache, which seems redundant and a performance concern. [131065940010] |Ok, you ask for experiences, this makes the question a little subjective and argumentative, but passable. [131065940020] |Linus said that referring to the uses that people usually attribute to O_DIRECT, and for those uses, IMO Linus is mostly correct. [131065940030] |Even if you do direct I/O, you cannot transfer data to/from devices directly into your program's variables; you need a buffer that is filled (by the program or the device) and transferred through a system call to the other end. [131065940040] |Also, to make it efficient, you will not want to reread something you have just read, in case you need it again. [131065940050] |So you need some sort of cache... and that is exactly what the kernel provides without O_DIRECT: a page cache! [131065940060] |Why not use that? [131065940070] |It also brings benefits if more processes want to access the same file concurrently, which would be a disaster with O_DIRECT. [131065940080] |Having said that, O_DIRECT has its uses: if for some reason you need to get data directly from the block device. [131065940090] |It has nothing to do with performance. [131065940100] |People using O_DIRECT for performance usually come from systems with bad page cache algorithms, or without POSIX advice mechanisms, or are just mindlessly repeating what other people have said. [131065940110] |To avoid these problems, O_DIRECT was a solution. [131065940120] |Linux, OTOH, has the philosophy that you should fix the real underlying problem, and the underlying problem was OSs that did a bad job with page caching.
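If you want to experiment with direct I/O from the shell rather than from C, GNU dd exposes O_DIRECT through iflag=direct. A minimal sketch (the file name is illustrative, and note that some filesystems, tmpfs for example, do not support O_DIRECT at all):

```shell
# Create a small scratch file to read back
dd if=/dev/zero of=./odirect-demo.bin bs=1M count=4 2>/dev/null

# Ordinary read: data flows through the kernel page cache
dd if=./odirect-demo.bin of=/dev/null bs=1M 2>/dev/null && echo buffered-ok

# Direct read: dd opens the file with O_DIRECT, bypassing the page cache;
# this fails with EINVAL on filesystems that lack O_DIRECT support
dd if=./odirect-demo.bin of=/dev/null bs=1M iflag=direct 2>/dev/null \
  && echo direct-ok || echo "O_DIRECT unsupported on this filesystem"

rm -f ./odirect-demo.bin
```

Timing the two variants (e.g. with time) on a larger file makes the page cache's effect visible on a second run of the buffered read.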
[131065940130] |I used O_DIRECT for a simple implementation of cat to find a memory error in my machine. [131065940140] |This is one valid use for O_DIRECT. [131065940150] |It has nothing to do with performance. [131065950010] |Note that using O_DIRECT is liable to fail in newer kernels with newer file systems. [131065950020] |See this bug report for example. [131065950030] |So not only is the use often dubious, it will likely not work at all in the coming generation of Linux distributions. [131065950040] |So I would not bet the performance of my code on it, even if you happen to be able to prove that it might have a benefit. [131065960010] |Multiple Mysql databases backup tool? [131065960020] |My OS is Ubuntu 10.10. [131065960030] |I have several small sites (all on various shared hosts) and I'm thinking that it'd probably be a good idea to regularly back up the MySQL databases from these remote shared hosts. [131065960040] |I currently do it manually (via phpMyAdmin), and while it's not laborious, it is definitely a boring process. :) [131065960050] |How would you guys go about it (no ssh access available)? [131065960060] |If I do it like this: [131065960070] |does that mean that the password is being transferred in plain text?: [131065960080] |Thanks.:) [131065970010] |Specifying the password that way is just insecure on your machine. [131065970020] |Over the network it's just the same way that MySQL clients connect to your database. [131065970030] |As far as I read, the username and password are hashed, so you are not sending your password in plain text. [131065970040] |In your situation I would write a simple wrapper script, something like [131065980010] |Need further explanation on TIME_WAIT [131065980020] |Hi Unix/Linux Gurus out there! [131065980030] |I need solid proof that TIME_WAIT (a lot of it, actually) is the real culprit in the slowdown of one of our servers.
[131065980040] |The server is hosted on Parallels Baremetal virtualization, and the actual server is a VM: CentOS5 with dual CPU and 2GB RAM. [131065980050] |A week ago, we started to notice that it was so slow that even doing an 'ls' on a directory with just a few files in there (around 20) would need around 1.5 seconds to display the results. [131065980060] |I tried doing vmstat but it doesn't seem to be even using its swap. [131065980070] |No bottlenecks on the network. [131065980080] |But running top, you'd see java mostly hogging the resources. [131065980090] |Java is needed since this VM is our hudson server. [131065980100] |One of my colleagues tried checking the connections via [131065980110] |And noticed that there were a lot of connections in TIME_WAIT...around 300+. [131065980120] |So we tried applying some of the recommendations in this page, particularly those for TCP_FIN_TIMEOUT, TCP_KEEPALIVE_INTERVAL & TCP_KEEPALIVE_PROBES. [131065980130] |The connections in TIME_WAIT went down but still fluctuate between 220 and 280 (maybe due to the fact that a new connection is added from time to time and other connections in TIME_WAIT are not yet "timed out"). [131065980140] |Perhaps we could try adding TCP_TW_RECYCLE & TCP_TW_REUSE later if we don't see any improvement. [131065980150] |Now going back to my main question: is there solid evidence that a lot of TIME_WAIT'ed connections eat up a lot of RAM? [131065980160] |Thanks in advance. [131065990010] |A connection in the TIME_WAIT state is simply waiting to see if any last straggling data packets make their way through the network from the other end, so that they don't get mixed in with another connection's packets. [131065990020] |It doesn't actually do anything with those packets. [131065990030] |So if anything, a TIME_WAIT connection uses fewer resources than an open connection.
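(The connection check elided in the question was presumably something along the lines of netstat -ant. A quick way to tally connections per TCP state is to feed that output through awk; shown here on canned sample lines so the pipeline is self-contained, with the printf replaced by netstat -ant in practice:)

```shell
# Count connections per TCP state (field 6 of `netstat -ant` output)
printf '%s\n' \
  'tcp 0 0 10.0.0.1:8080 10.0.0.2:51000 TIME_WAIT' \
  'tcp 0 0 10.0.0.1:8080 10.0.0.3:51001 TIME_WAIT' \
  'tcp 0 0 10.0.0.1:22   10.0.0.9:40000 ESTABLISHED' |
awk '{states[$6]++} END {for (s in states) print states[s], s}' | sort -rn
# → 2 TIME_WAIT
# → 1 ESTABLISHED
```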
[131065990040] |A well-provisioned webserver these days can handle over 10,000 simultaneous connections (note that that was written in 2003, and Moore's Law keeps on marching). [131065990050] |Since, if anything, a connection in the TIME_WAIT state will use up less memory than an open connection, 300 connections in TIME_WAIT should be nothing. [131065990060] |For more info on TIME_WAIT, see http://tangentsoft.net/wskfaq/articles/debugging-tcp.html and http://developerweb.net/viewtopic.php?id=2941. [131065990070] |Meanwhile, I wonder how your disk I/O usage looks. [131065990080] |Heavy disk I/O can slow down the Linux kernel far more easily than heavy CPU usage, in my experience. [131065990090] |You may want to look into the iostat and dstat tools, and see what they tell you. [131066000010] |Using an incrementing variable in a bash command line for loop? [131066000020] |I'm using a bash command-line for-loop to concatenate a group of files together, and I'd like to append an incrementing digit. [131066000030] |Something like this: [131066000040] |So the output would look like this: [131066010010] |What you need to do is: [131066010020] |or use [131066020010] |Processed with the script: [131066020020] |Writes the following in files.grp: [131066030010] |Incrementing a variable in a for loop in shell is often a mistake. [131066040010] |Install MySQL from Bash Script [131066040020] |I'm coding a bash script to automate the process of deploying VPS servers, but I'm having some trouble while trying to install MySQL from either aptitude/apt-get or yum; this is what I have so far: [131066040030] |It seems that the script keeps running ad infinitum; I suspect the problem is that the mysql-server package brings up a wizard to specify the MySQL root password, but I've no idea how to bypass it or supply the password from within the script. [131066040040] |Does anyone know how I can work around this problem? [131066050010] |You can use the DEBIAN_FRONTEND environment variable.
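For example (a sketch: the real line for the script is the commented apt-get call; the env | grep line merely demonstrates that the assignment is scoped to the single command it prefixes):

```shell
# With DEBIAN_FRONTEND=noninteractive, debconf prompts (such as the
# mysql-server root password wizard) are skipped or answered with defaults.
# On the actual Debian/Ubuntu server you would run:
#
#   DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server
#
# The assignment applies only to that one command's environment:
DEBIAN_FRONTEND=noninteractive env | grep '^DEBIAN_FRONTEND='
# → DEBIAN_FRONTEND=noninteractive
```

If you also want to preset the root password rather than leave it unset, debconf-set-selections can feed answers to the package's debconf templates before the install.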
[131066050020] |Or, if you will run more than one install, you might want to add an export to the top of your script. [131066060010] |How to upgrade mono on openSuse [131066060020] |I have a virtual machine running openSuse 11.2 that has mono 2.6.4; I use this VM as a test server to test asp.net applications under Apache mod_mono. [131066060030] |I wanted to upgrade (in the same virtual machine) to mono 2.8.2. I downloaded several rpm files from http://ftp.novell.com/pub/mono/download-stable/openSUSE_11.2/i586/ but I'm in a dependency "loop" and don't know which package to install in the correct order... [131066060040] |(Did I mention that I know very little of suse?) [131066060050] |Edit: Is it possible to find a way to upgrade it without network connectivity? [131066060060] |Thanks! [131066070010] |Go to this page at opensuse.org and click the "1-Click Install" button on the mono-complete-2.8.2 meta package. [131066070020] |Then all your loop dependencies will be solved automatically by the YaST manager. [131066070030] |It is the usual user-friendly way to install packages on openSuSE. [131066080010] |You can either use yast (interactive) or zypper to directly update software from a repository. [131066080020] |This will avoid any dependency problems. [131066080030] |This refreshes the repository and then updates the system. [131066080040] |Adding a package after the update (up) command is optional and can be used to only update a specific one. [131066090010] |Recommended mailing list manager for use with Postfix [131066090020] |I have a small VPS (256MB RAM, 12% cap CPU) with Postfix installed that I use to host my mail domains. [131066090030] |I want to run a mailing list manager, but would like to avoid Mailman because of its big footprint. [131066090040] |Plus, I sysadmin a couple of them and would like to try new things :-) [131066090050] |Any recommendation? [131066100010] |The Postfix web site lists several mailing list managers.
[131066100020] |Majordomo is listed there, and is pretty common, and I know it has run on systems with fewer resources than you've mentioned. [131066110010] |I used Mailman on Postfix with success. [131066110020] |Its command-line administration is easily understandable. [131066110030] |Caveat: they were all low-traffic lists. [131066120010] |Why put some config info in conf/httpd.conf and some in files in the conf.d folder? [131066120020] |The main apache config file is in /etc/httpd/conf/httpd.conf on my CentOS system, and in there is a line: [131066120030] |Inside conf.d are mostly files that do something like this: [131066120040] |But there are also other sites that are set up in there too, and they have their own config files. [131066120050] |Was this not well thought out or am I missing something? [131066130010] |I've found that there's not a very well documented specification on exactly where which configuration files go in apache. [131066130020] |Especially since they've recently changed how the default does it. [131066130030] |Did you install from source or from a package? [131066130040] |Packages, especially debian packages, seem to not follow the apache source at all. [131066130050] |It's been a while since I've done much with apache, but if I remember, conf.d/ is where you would put directives that load the daemon modules, like what you've posted, or ffi or stuff like that. [131066130060] |While conf/ is where site specific configuration files go. [131066130070] |This is what mine looks like; this is installed from source. [131066130080] |But also note that this isn't a live server and I built this apache install specifically to test Wt [131066140010] |Since there are several packages that can provide functionality to Apache's HTTPd, the base package installs an httpd.conf that provides most of the basic settings, and other packages, such as mod_ssl, nagios and php, have configuration files that need to be included per-package.
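The mechanism tying these together is a single Include directive in httpd.conf (on CentOS typically Include conf.d/*.conf). A self-contained sketch of why such drop-in directories compose well (paths here are illustrative, not the live system's):

```shell
# Each package drops its own file into the conf.d directory...
mkdir -p ./demo-conf.d
printf 'LoadModule ssl_module modules/mod_ssl.so\n' > ./demo-conf.d/ssl.conf
printf 'LoadModule php5_module modules/libphp5.so\n' > ./demo-conf.d/php.conf

# ...and httpd.conf needs only one unchanging line, e.g.:
#   Include conf.d/*.conf
# which the server reads as the concatenation of every drop-in file:
cat ./demo-conf.d/*.conf

rm -r ./demo-conf.d
```

Packages can then add or remove their own file at install/uninstall time without ever rewriting httpd.conf.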
[131066140020] |The Red Hat packagers use the conf.d directory to drop in the configuration for those packages; otherwise they'd need to modify httpd.conf for each package, which is something difficult to automate during package installation. [131066150010] |Separating configuration files is the approach to managing them. [131066150020] |By putting configuration lines specific to a module into their own files, it becomes much easier to enable and disable modules. [131066150030] |It also helps with managing them, because now you only have a small configuration file to edit. [131066150040] |(Imagine opening up a 500-line httpd.conf and looking for an incorrect option.) [131066150050] |Different systems seem to have different ways to separate apache configuration files. [131066150060] |For example on my Gentoo there are modules.d/ and vhosts.d/, while on my Ubuntu there are conf.d/, mods-available/, mods-enabled/, sites-available/ and sites-enabled/. [131066150070] |You can guess what they do by the name, or look inside httpd.conf for Include lines. [131066160010] |What is the difference between LILO and GRUB? [131066160020] |I am running a web server under Debian and I currently have GRUB installed. [131066160030] |Should I consider using LILO instead of GRUB? [131066160040] |And what are the advantages of each? [131066170010] |You should use GRUB, or probably GRUB2 as it is much newer. [131066170020] |GRUB's advantages over LILO include support for larger disks (you don't have to have your boot partition at the beginning of the disk) and support for EFI boot. [131066170030] |If you are using an old computer with a working LILO, there is no specific reason to upgrade to GRUB. [131066170040] |Another reason: there are no updates for LILO, and practically no support. [131066170050] |Or even a website. [131066180010] |LILO has a simpler interface and is easier to wrap your head around. [131066180020] |GRUB is more featured and handles odd configurations better.
[131066180030] |The LILO bootstrap process involves locating the kernel by, in essence (it's more complicated than this), pointing to the first logical sector of the kernel file. [131066180040] |The GRUB bootstrap process is more filesystem-aware and can locate a kernel file in a filesystem without having to specify a logical sector. [131066180050] |There is a reason nearly everyone is using GRUB these days, and that's because it's less fragile and handles edge-cases better. [131066190010] |I guess the main advantages (for me) of GRUB are [131066190020] |
  • I don't have to remember to run 'lilo' after a kernel update. [131066190030] |GRUB has real support for filesystems, so it can find the kernel on disk.
[131066190040] |
  • Commandline. [131066190050] |GRUB lets you enter a command line, which tends to be handy if I mess up the configuration. [131066190060] |Sometimes it's a lifesaver.
[131066190070] |Main advantages of LILO: [131066190080] |
  • Supports any filesystem, as it works around the concept entirely (it points at sectors rather than files)
[131066190090] |
  • It is small
[131066190100] |I'd say that in 99% of cases you'll prefer GRUB. [131066200010] |Desktop overlay program showing CPU, HDD, etc. stats [131066200020] |In different screenshots of people's linux desktops, I've seen different apps that overlay the desktop with information about their computer. [131066200030] |Often this gadget/app shows CPU and HDD information. [131066200040] |Sometimes it has network and temperature information as well. [131066200050] |I've seen these a lot but they often have different looks and different information. [131066200060] |What program does this? [131066200070] |Is it built in to any linux distro? [131066210010] |I use conky to display date, battery, cpu, ram and swap information. [131066210020] |You can find my conky file here or take a look at a thread about conky configs in the arch-linux forum. [131066210030] |There you'll find many different configs and screenshots of conky in use. [131066220010] |I use the conky-colors theme/scripts for one of my desktops. [131066220020] |It's a fairly user-friendly way to begin with conky. [131066230010] |Why would anyone not set 'histappend' in bash? [131066230020] |After finding out what shopt -s histappend means, it seems a very sane setting, and I'm surprised that it isn't the default. [131066230030] |Of course I may be shortsighted, so why would anyone not use such a setting? [131066230040] |Why would anyone want to wipe their history on each shell exit? [131066240010] |For historical compatibility, I guess. [131066240020] |The histappend option didn't exist until bash 2.0. [131066250010] |Well, when histappend is not set, this does not mean that the history is wiped on each shell exit. [131066250020] |Without histappend, bash reads the histfile on startup into memory - during operation new entries are added - and on shell exit the last HISTSIZE lines are written to the history file without appending, i.e. replacing the previous content.
[131066250030] |For example, if the histfile contains 400 entries, 10 new entries are added during the bash session, and HISTSIZE is set to 500, then the new histfile contains 410 entries. [131066250040] |This behavior is only problematic if you use several bash instances in parallel and care about the fact that, in that case, the history file only contains the contents of the last exiting shell. [131066250050] |Independent of this: There are some people who want to wipe their history on shell exit for privacy reasons. [131066260010] |Need to upgrade svn on centos [131066260020] |Possible Duplicate: Need to upgrade svn on centos [131066260030] |I have an error when I run svn up [131066260040] |svn this client is too old to work with working copy...please get a newer subversion client [131066260050] |This is on centos. [131066260060] |I need to update svn. [131066260070] |How do I do this? [131066260080] |yum update does the same thing. [131066260090] |Additionally, only one directory is supposedly affected, but deleting this from the repository doesn't fix the issue. [131066270010] |Need to upgrade svn on centos [131066270020] |I have an error when I run svn up [131066270030] |svn this client is too old to work with working copy...please get a newer subversion client [131066270040] |This is on centos. [131066270050] |I need to update svn. [131066270060] |How do I do this? [131066270070] |yum update does the same thing. [131066270080] |Additionally, only one directory is supposedly affected, but deleting this from the repository doesn't fix the issue. [131066280010] |If you originally pulled down the subversion package through Yum you could simply try yum update. [131066280020] |And if you didn't pull it down through Yum originally you can try to pull it down now with yum install subversion [131066290010] |Probably apt or something along those lines could update it. [131066290020] |I'm not familiar with the package manager on CentOS.
[131066300010] |RHEL 5.6 has Subversion 1.6 in it (up from 1.4 in 5.5), so as soon as CentOS has released the RHEL 5.6 package it'll just automatically upgrade. [131066300020] |If you can't wait, then you can rebuild your own package from Red Hat's source package: http://ftp.redhat.com/pub/redhat/linux/enterprise/5Client/en/os/SRPMS/subversion-1.6.11-7.el5.src.rpm [131066310010] |http://the.earth.li/pub/subversion/summersoft.fay.ar.us/pub/subversion/latest/ also has the latest RPMs (no rebuilding needed) for CentOS 4 and 5, which is what I've always used. [131066320010] |Hi syn4k, [131066320020] |Try searching http://rpm.pbone.net for "subversion-1.5" if that's what you need, or just change it to "subversion-1.6" in case you need that version, limiting your search to "Centos 5" or "RHEL 5", and download the rpm that corresponds to your machine. [131066320030] |Install it via rpm as follows: [131066320040] |rpm -i subversion-package-name-here.rpm [131066320050] |If it says it conflicts with your existing subversion, check for the name of your existing subversion: [131066320060] |rpm -qa | grep subversion [131066320070] |and use the result for uninstalling the old package: [131066320080] |rpm -e subversion-old-package.bhabha [131066320090] |Then go back to installing the rpm as I detailed above. [131066320100] |Hope this helps, [131066320110] |Ismael Casimpan :) [131066330010] |Issues with ardour and jackd [131066330020] |I'm trying to play with ardour. [131066330030] |When I started it up, it complained that jackd isn't running, so I ran jackd -d alsa, which displayed: [131066330040] |It seems this didn't help much because running ardour2 displayed the following: [131066330050] |[note] These are native Debian packages. [131066340010] |Filtering invalid utf8 [131066340020] |I have a text file in an unknown or mixed encoding. [131066340030] |I want to see the lines that contain a byte sequence that is not valid UTF-8 (by piping the text file into some program).
[131066340040] |Equivalently, I want to filter out the lines that are valid UTF-8. [131066340050] |In other words, I'm looking for grep [notutf8]. [131066340060] |An ideal solution would be portable, short and generalizable to other encodings, but if you feel the best way is to bake in the definition of UTF-8, go ahead. [131066350010] |I think you probably want iconv. [131066350020] |It's for converting between codesets and supports an absurd number of formats. [131066350030] |For example, to strip anything not valid in UTF-8 you could use: [131066350040] |iconv -c -t UTF-8 < input.txt > output.txt [131066350050] |Without the -c option it'll report problems in converting to stderr, so with process redirection you could save a list of these. [131066350060] |Another way would be to strip the non-UTF8 stuff and then [131066350070] |diff input.txt output.txt [131066350080] |for a list of where changes were made. [131066360010] |Relying on Perl, and baking in the definition of UTF-8: [131066370010] |How to force 'yum search' to use local metadata/cache? [131066370020] |A lot of times when I run yum search, it tends first to download the metadata from a repository. [131066370030] |How do I force it to only search the local cache? [131066370040] |I prefer quick results to inaccurate data in this case, and am on a slow line. [131066380010] |Well, by default the yum cache is disabled; read down for more details. A trick to improve the speed is the following. [131066380020] |This will not update the local cache, therefore your search will be a little bit faster. [131066380030] |I prefer to create a local file with all the packages, doing it this way: [131066380040] |Then I can grep for what I'm searching: [131066380050] |That's all... [131066380060] |From time to time I'll execute yum list all again, to update the list. [131066380070] |Important note [131066380080] |From the Fedora manual [131066380090] |
[131066380100] |By default, current versions of yum delete the data files and packages that they download, after these have been successfully used for an operation. [131066380110] |This minimizes the amount of storage space that yum uses. [131066380120] |You may enable caching, so that yum retains the files that it downloads in cache directories. [131066380130] |Caches provide three advantages: [131066380140] |By default, yum stores temporary files under the directory /var/cache/yum/, with one subdirectory for each configured repository. [131066380150] |The packages/ directory within each repository directory holds the cached packages. [131066380160] |For example, the directory /var/cache/yum/development/packages/ holds packages downloaded from the development repository. [131066380170] |If you remove a package from the cache, you do not affect the copy of the software installed on your system. [131066380180] |1.1. [131066380190] |Enabling the Caches [131066380200] |To configure yum to retain downloaded files rather than discarding them, set the keepcache option in /etc/yum.conf to 1: [131066380210] |Refer to Section 9.1, “Editing the yum Configuration” for more information on editing the yum configuration file. [131066380220] |Once you enable caching, every yum operation may download package data from the configured repositories. [131066380230] |To ensure that the caches have a set of package data, carry out an operation after you enable caching. [131066380240] |Use a list or search query to download package data without modifying your system. [131066390010] |What's causing VirtualBox OSE to hang my machine? [131066390020] |I'm using VirtualBox OSE and recently, when I run Ubuntu 10.10 on it, my machine tends to hang, forcing me to hard-reset it (not good). [131066390030] |How do I start finding where the problem is? [131066390040] |Here's the last line from "/var/log/syslog", before the reset: [131066390050] |notes: [131066390060] |
  • VirtualBox OSE is version 3.2.10
  • [131066390070] |
  • I use 32-bit 2.6.37 kernel on Debian Squeeze
  • [131066390080] |
  • I can't reproduce this problem when using Fedora 14 VM
  • [131066400010] |This is a shot in the dark, but we used to have these inexplicable issues with VirtualBox in connection with using bridged networking and offloading. [131066400020] |Try [131066400030] |this should be fixed in the 4.x series as far as I know. [131066410010] |Why are underscores not allowed in usernames in some distros (Debian for example) [131066410020] |So why has the underscore been considered a bad character for user names in Debian (and possibly other distributions) while it has been removed from adduser's NAME_REGEX in Ubuntu? [131066420010] |I'm using Debian Squeeze and I managed to create a user with an underscore, adduser user_1. [131066420020] |Why do you say they are not allowed? [131066430010] |A similar question has already been answered here [131066430020] |Theoretically you can use almost any ASCII character you want as a username, but, to avoid certain bugs, like the one mentioned in the above article, you can set a regular expression that avoids those issues. [131066430030] |I hope it helps... [131066440010] |POSIX specifies the usage of a portable set of characters for user and group names. _ - . are allowed characters; NAME_REGEX checks that the username contains only the specified characters. [131066440020] |The distribution developers decide whether further characters are disallowed. [131066440030] |Ubuntu, for example, forbids the use of . by default. [131066440040] |Adding this restriction avoids interference with other system tools, which may interpret special characters. [131066440050] |Think of the variable $PATH, when you have a user with the name my:user and add your home directory to $PATH: [131066440060] |/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/my:user/bin [131066440070] |The directories /home/my and user/bin would (probably) not exist. [131066440080] |Further, /etc/passwd would have two : more than needed.
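The $PATH splitting just described can be demonstrated in a couple of lines of bash (the my:user name is the answer's hypothetical example):

```shell
# A PATH containing a home directory for a user named "my:user".
demo_path="/usr/local/bin:/usr/bin:/home/my:user/bin"

# The shell splits PATH on every colon, so the single directory
# /home/my:user/bin falls apart into two bogus entries.
IFS=':' read -r -a entries <<< "$demo_path"
printf '%s\n' "${entries[@]}"
```

The single directory /home/my:user/bin comes out as the two bogus entries /home/my and user/bin, neither of which is the intended directory.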
[131066440090] |Edit: Debian's adduser (version 3.110) uses /^[_.A-Za-z0-9][-\@_.A-Za-z0-9]*\$?$/ for checking usernames; _ is allowed as long as NAME_REGEX does not forbid it. [131066450010] |Distributed package repository for Linux? [131066450020] |Many times I have seen the note telling users not to update their repositories too often (i.e. more than once a day) because that puts too large a load on the servers. [131066450030] |Also I understand that it takes monstrous machines to host such repositories. [131066450040] |I am wondering if there is something like BitTorrent for package management? [131066450050] |Or if there isn't, is it feasible to have such a system? [131066450060] |(I'm thinking about a system where each user keeps the packages that they have and serves them in the same fashion as BitTorrent.) [131066460010] |Also, if this is within a corporate setting, you can set up local mirrors and/or proxies to alleviate this problem. [131066460020] |There is no need for BitTorrent that way. [131066460030] |If you are using a Debian-based distribution, you can already just use apt-cacher/apt-proxy to do it. [131066470010] |You can try setting up a sort of Apache-based load balancer and specify several different mirrors as the back end. [131066470020] |Each time you hit your proxy a different real mirror will be hit, thereby spreading your requests among several mirrors. [131066470030] |But, most mirrors only update once a day. [131066470040] |A single mirroring job can be lengthy, especially when there's a large update. [131066470050] |For mirror operators, syncing more than once per day is usually not worth it, and mirroring frequently puts much more stress on the upstream than a simple apt-get update. [131066470060] |Remember, mirrors and distributions are often run on a volunteer basis. [131066470070] |Those volunteers have to pay big bucks for bandwidth, storage, electricity, cooling, etc. [131066470080] |I operate a Debian mirror myself.
[131066470090] |Please, help us out and don't slam us too hard. [131066480010] |Indeed it's possible and of course it exists, but all of the existing projects are still just experiments. [131066480020] |I found two interesting papers: [131066480030] |
  • A P2P-Based Architecture for Secure Software Delivery Using Volunteer Assistance
  • [131066480040] |
  • Simulation platform for distributed package management network. [131066480050] |Of specific interest is a chapter where the concept of Repository-less Package Management Network is introduced.
  • [131066480060] |So yes, it's possible and it's indeed a really interesting subject. [131066490010] |Is there an easy way to update information in known_hosts when you know that a host's key has changed? [131066490020] |If a host's operating system has been re-installed and had its public key regenerated, sshing to it will of course fail because the new key doesn't match the old one. [131066490030] |Is there an easier way to tell ssh that you know that the host's key has changed and that you want it to be updated? [131066490040] |I think it feels a bit error-prone to use a text editor or something like sed to remove the offending line. [131066500010] |Use ssh-keygen -R hostname to remove the hostname from your known_hosts file. [131066500020] |The next time you connect, the new host key will be added to your known_hosts file. [131066510010] |xorg memory leaks [131066510020] |On my desktop Xorg leaks a lot of memory. [131066510030] |I tend to leave this machine on for days (weeks, months ...), but about once a week I'm forced to restart Xorg, because it is taking too much memory. [131066510040] |Am I the only one affected by this issue? [131066510050] |Why does this happen? [131066510060] |Is it Xorg that is buggy, or is it something else (maybe some Xorg modules)? [131066510070] |Can I do anything to avoid it? [131066510080] |Some more info: [131066510090] |I'm on ArchLinux, kernel 2.6.36, nvidia drivers 260.19.36, gnome 2.32.1. [131066510100] |Anyway, I've had this issue for a long time, more than a year. [131066510110] |Here is how the Xorg process is shown by top after about 9 days of uptime... [131066510120] |I've got only a few open windows. [131066510130] |229MB of resident memory looks like a lot to me (and it will keep growing with time). [131066510140] |As soon as I reboot Xorg I'll also add process stats at the beginning of its life.
[131066510150] |Here it is as soon as my GNOME desktop comes up: [131066520010] |Here is an Ubuntu bug report for a memory leak in Xorg: https://bugs.launchpad.net/ubuntu/+source/xorg-server/+bug/565981 [131066520020] |Does that help at all? [131066530010] |Hi, I have the same problem and a similar OS. The memory leak is probably in GNOME. [131066530020] |I see huge memory leaks in Xorg's memory space when I use the applet to change the monitor resolution. [131066530030] |I don't know where the problem is, but if you disable this applet the memory leaks will not appear any more. [131066540010] |Some memory leaks in the X server are applications' faults; if they create server-side Pixmaps and don't clean up after themselves, they stick around forever. [131066540020] |(This is intended to be a feature, so e.g. GNOME could install a set of standard Pixmaps once and then use them from any program by passing around their XIDs, as an optimization.) xrestop can be used to track these down. [131066550010] |Excluding a directory when zipping files [131066550020] |I've got a bash script that does something like this: [131066550030] |The directory it's backing up is an SVN archive (it used to be CVS back in the day). [131066550040] |I've been unable to get it to exclude .svn and its contents. [131066550050] |What's the cleanest way to exclude .svn (recursively through the entire tree)? [131066560010] |-x also accepts a list of excludes. [131066560020] |Alternatively, create a filelist with your excludes and add them there. [131066560030] |The exclude.lst: [131066560040] |with the command: [131066570010] |Note that your exclude patterns as posted aren't quite right; they exclude any file or directory whose name contains CVS. [131066580010] |Does Fish have a "magic space"? [131066580020] |I gather that bash has a "magic space" function, where if I do e.g. sudo !! it will expand to sudo ./my_last_command. [131066580030] |Does something similar exist in fish?
[131066590010] |From the FAQ in the source (I can't find any documentation online): [131066590020] |Why doesn't history substitution ("!$" etc.) work? [131066590030] |Because history substitution is an awkward interface that was invented before interactive line editing was even possible. [131066590040] |Fish drops it in favor of perfecting the interactive history recall interface. [131066590050] |Switching requires a small change of habits: if you want to modify an old line/word, first recall it, then edit. [131066590060] |E.g. don't type "sudo !!" - first press Up, then Home, then type "sudo ". [131066600010] |Best place to get nVidia driver update for CentOS 5.5 [131066600020] |What is the best place to get an updated nVidia video driver for CentOS 5.5? [131066600030] |Is there a CentOS package available that will update it? [131066600040] |Or is it best to download it directly from nVidia? [131066600050] |Also, I do not have an internet connection on the machine, so it will have to be a manual download and installation. [131066600060] |Thanks, DemiSheep [131066610010] |I prefer to use the packages found at ELRepo. [131066610020] |You will probably need the kmod-nvidia and the nvidia-x11-drv package. [131066620010] |I just used the nVidia driver directly from nVidia, following their directions. [131066630010] |Storing `find` parameters in a variable [131066630020] |I'm running the following bash command: [131066630030] |I'm running the same parameters in another call to find later on in the script. [131066630040] |Is there any way to store the \( -iname '*.aif' -o -iname '*.pdf' -o -iname '*.exe' \ -o -iname '*.mov' -o -iname '*.doc' \) part in a variable? [131066640010] |In ksh, bash or zsh, use an array variable. [131066640020] |In other shells, there's no good way (there are complex and brittle ways but I don't recommend them unless you really need them). 
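A sketch of the array approach in bash (the variable name findargs and the search directory are my own):

```shell
# Store the predicates once; the array keeps each word, including the
# escaped parentheses, as a separate element with quoting intact.
findargs=( \( -iname '*.aif' -o -iname '*.pdf' -o -iname '*.exe'
           -o -iname '*.mov' -o -iname '*.doc' \) )

# Reuse them in any later invocation: "${findargs[@]}" expands each
# element as its own word, so the quoting survives.
find . "${findargs[@]}" -print
```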
[131066640030] |If you need a single array, you can use the positional parameters: [131066650010] |How can I randomize file names in a given directory? [131066650020] |Ok, this may sound like an odd request but it really has a good purpose! [131066650030] |I have a Panasonic Plasma TV with an SD card slot, via which I can display a slideshow of JPEG photos. [131066650040] |This is useful for preventing burn-in, if I step away from the TV for a few moments while cooking, or if I answer the phone, or whatever, it's handy to just start the slideshow to prevent a static image from showing up, and also let me see photos while I do something else. [131066650050] |The problem is the slideshow always starts from the first photo alphabetically, meaning I always see the same images and never get to the end. [131066650060] |What I'd like to do is have a script on my Unix machine which I can run on the SD card (mounted at /media/sdcard) and have it randomize the names of all the files under that directory. [131066650070] |Doesn't really matter how it randomizes them but I'd prefer purely numeric names (for no apparent reason) [131066650080] |Is there an easy way to accomplish this? [131066650090] |Something like find /media/sdcard -exec mv {} rand() \;? [131066650100] |For bonus points, any way to prevent the script from overwriting two files in the event that there's a collision? [131066650110] |These are copies of photos so if I lose one I still have the original, but still that would be inconvenient... [131066660010] |There's nothing Unix specific here... [131066660020] |This'll break if there are folders in input_path (which you can't hardlink), but you can easily rewrite the file "hunting" logic to something much more robust (recursive search etc.) with the help of os.path and os.walk. [131066670010] |Replace mv by ln or ln -s and possibly a different target directory as you see fit. 
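The script itself is not preserved in this excerpt; here is a minimal sketch of the general idea (the function name and the sequential numeric naming are my own, and it moves the files into a separate target directory):

```shell
randomize_names() {
    # Move every regular file from $1 into $2 under a numeric name
    # (keeping the extension). shuf randomizes the processing order,
    # so the numbers land on the photos at random; mv -n refuses to
    # overwrite, so a pre-existing name in $2 loses nothing.
    src="$1" dest="$2" i=1
    find "$src" -maxdepth 1 -type f | shuf | while IFS= read -r f; do
        mv -n "$f" "$dest/$i.${f##*.}"
        i=$((i + 1))
    done
}
```

For example: randomize_names /media/sdcard /media/sdcard-shuffled. File names containing newlines are not handled; for a pile of camera JPEGs that is usually fine.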
[131066670020] |Note that since find may still be traversing the directory by the time mv runs, you shouldn't rename or link the file inside the same directory. [131066670030] |shuf is specific to GNU coreutils; the rest is POSIX. [131066670040] |If you're not on Linux or Cygwin, see alternatives in awk or Perl. [131066680010] |I see an accepted answer and another good answer. [131066680020] |Anyway, here is a script that I have been using. [131066680030] |It randomizes the names by changing them to a hash sequence computed from the full path. [131066680040] |Because the full path is unique on each system it is expected that file names don't clash (this is not fool-proof and adding a check would be easy but I didn't bother). [131066680050] |This script renames files recursively but leaves directory names intact. [131066680060] |Use it by passing the folder to rename recursively as a parameter. [131066690010] |How can I set env variables so that KDE recognizes them? [131066690020] |I need to set some environment variables, such as the XDG spec ones, before KDE starts, in such a way that kwin and any apps run from KDE will inherit them. [131066690030] |Where could I do this, and how? [131066700010] |Put them in a .sh file in ~/.kde/env/ (possibly ~/.kde4/env/ or similar; varies by distribution). [131066710010] |Shift+Ctrl+[Left|Right] to highlight text, then type, ignores the first two characters typed [131066710020] |I'm using openSuse. [131066710030] |It was installed for me by the IT group here at work this week. [131066710040] |I routinely use Shift+Ctrl+ some arrow key to highlight text, and I'm in the habit of simply typing in order to replace the text that has been highlighted. [131066710050] |If I highlight the text with the mouse and start typing, everything works fine.
[131066710060] |If I highlight the text using the keyboard combination, then the first character I type deletes the highlighted text and the second does nothing, then the remainder of the characters I type are put as a replacement to the highlighted text. [131066710070] |This behavior appears to happen regardless of application. [131066710080] |I've seen it in Google chrome (although not the URL bar), Firefox, and in Eclipse text editors. [131066710090] |I have no idea what's going on, but it's really annoying and slowing me down in Eclipse. [131066710100] |I can get around in linux, but I'm no guru. [131066710110] |Where should I look to figure out what is going on? [131066710120] |Update: I'm in Gnome. [131066710130] |I've seen the behavior in Open Office writer, Google Chrome, Firefox, Eclipse, Thunderbird. [131066710140] |I did not see it in Tomboy Notes or gedit. [131066720010] |So you would expect the first and second characters to overwrite the selected text, right? [131066720020] |What does xev print? [131066720030] |(Run it from a terminal, then move the mouse over the window, then press Ctrl Shift Left Left a b) [131066720040] |For me, it does this. [131066720050] |Pressing and holding Ctrl then Shift... [131066720060] |then Left, Left... [131066720070] |then letting go of Ctrl and Shift... [131066720080] |then pressing a, b [131066720090] |I would look especially at the last two blocks, i.e. when releasing Ctrl and Shift and then when pressing a b to see if there are any differences. [131066720100] |Other thoughts: [131066720110] |
  • do you have Sticky Keys on?
  • [131066720120] |
  • do you have Ctrl+Shift set to change the keyboard layout or language?
  • [131066730010] |How do I delete everything in a directory? [131066730020] |I'm sorry for asking such a basic question: [131066730030] |How do I delete everything in a directory, including hidden files and directories? [131066730040] |Right now, I use the following: [131066740010] |Try rm -rf *?*. [131066740020] |This will delete normal and hidden files. [131066750010] |Each of the three pattern expands to itself if it matches nothing, but that's not a problem here since we want to match everything and rm -f ignored nonexistent arguments. [131066750020] |Note that .* would match ... [131066760010] |if you are in the directory: [131066760020] |cd .. &&rm -rf dir &&mkdir dir &&cd dir [131066760030] |otherwise: [131066760040] |rm -rf /path/to/dir &&mkdir /path/to/dir [131066770010] |The best answer is: Don't do that. [131066770020] |Recursively remove the directory itself, then recreate it. [131066770030] |It's more reliable and easier for other people to understand what you're trying to do. [131066770040] |When you re-create the directory it may have a different owner, group and permissions. [131066770050] |If those are important be careful. [131066780010] |Oh my Zsh [131066780020] |Again, this is for Zsh only. [131066790010] |How about using find. [131066790020] |I think this is generally a good choice, when you have to dig through sub-directories. [131066800010] |Simple and straight forward: [131066800020] |Includes directories and hidden files. [131066800030] |At least gnu-find knows -delete, your find may differ. [131066810010] |Install Git on CentOS 5.5 [131066810020] |I have added EPEL repo on my YUM. [131066810030] |When run yum install git-core ... [131066810040] |But, when run ... [131066810050] |Then ... [131066810060] |Any idea? [131066820010] |So, [131066820020] |I was recently setting up a cpanel instance on this server, and I was pretty surprised as I have installed git without issue before on CentOS boxes before. 
[131066820030] |So cpanel has blocked all perl packages from being installed or updated because they don't want updates to break or conflict with their packages. [131066820040] |Thankfully yum provides a nice one-time workaround for this kind of situation. [131066820050] |yum --disableexcludes=main install git [131066830010] |Your basic problem here is that you're mixing the RPMforge and EPEL add-on repos. [131066830020] |As you see, they're not always compatible. [131066830030] |Pick one and use that. [131066840010] |Just remove the perl* from your /etc/yum.conf file, then rerun your yum install. [131066840020] |Make sure you remember to put it back in the conf file after you've installed git, or some subsequent update may clobber something cpanel needs. [131066850010] |Using /etc/hosts or iptables to redirect a site to its https:// version [131066850020] |I'd like to have my system automatically take me to https://facebook.com even if I put http://facebook.com into my browser. [131066850030] |I can get /etc/hosts to redirect me to different domains, but it seems to ignore it if I put https:// into it. [131066850040] |This works, and it redirects facebook.com to google.com: [131066850050] |This does not: [131066850060] |My guess is that you can't use text at all in the part where the IP is, but how do I force it to use https? [131066850070] |Is this possible in iptables? [131066860010] |No, it is not possible using iptables. [131066860020] |If you used it to redirect port 80 to port 443, your browser would still speak to it using http rather than https, and all you would get is garbage. [131066860030] |Maybe something using a Squid proxy would work. [131066860040] |You could make it a transparent proxy if you can't change everyone's proxy settings. [131066860050] |Or, if it's just for Facebook, there is a new per-user setting to force HTTPS that might work for you when it is rolled out. [131066860060] |Or, if you're using Firefox, check out HTTPS Everywhere.
[131066870010] |Help with piped program in sendmail's /etc/aliases [131066870020] |Hi everyone, [131066870030] |I'm trying some sort of auto-subscription via a homegrown script. [131066870040] |I know it can be achieved by mailing lists such as Mailman but I also want to learn at the same time on how to do it by hand. [131066870050] |Here's the simple script: [131066870060] |I attached the above script in /etc/aliases using the syntax: [131066870070] |and run [131066870080] |It's still a very bare script. [131066870090] |Just testing out if I my syntax in /etc/aliases is correct. [131066870100] |But when I tried emailing subscribe@mydomaintests.tld, it returns something like: [131066870110] |I'm using Lotus Notes so my google search directed me to this link. [131066870120] |Apparently, something to do with the file...Not sure. [131066870130] |The command is executable, in fact I tried making it 777 and even created the mail_received.txt in the directory just to ensure I have no file permission problem but still the same. [131066870140] |Can anyone pitch in please? [131066870150] |Thanks in advance. [131066880010] |is wrong. [131066880020] |If you're trying to print lines from STDIN to mail_received.txt, you would need: [131066880030] |because print with one argument takes the argument to mean the list to print, not the filehandle to print it to. [131066880040] |Also, no need for quotes around the filehandle name in open. [131066880050] |Just use RCV_MAIL. [131066890010] |If you're running a sendmail with smrsh set up (common in a lot of default configurations) you will need to run the piped command out of /etc/smrsh/. [131066890020] |It can either be a symlink or a copy of the script, but if sendmail has 'smrsh' defined, it will need to be run from that directory. [131066890030] |For example: [131066890040] |Check the sendmail documentation on smrsh for more details. 
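A sketch of that arrangement (the helper function and the paths are hypothetical, not taken from the answer):

```shell
install_smrsh_handler() {
    # smrsh only executes programs that live in its own directory, so
    # a handler referenced from /etc/aliases must be linked (or
    # copied) into it before sendmail will run the pipe.
    script="$1"     # e.g. /usr/local/bin/subscribe.pl
    smrsh_dir="$2"  # typically /etc/smrsh
    ln -s "$script" "$smrsh_dir/$(basename "$script")"
}
```

Run as root, e.g. install_smrsh_handler /usr/local/bin/subscribe.pl /etc/smrsh.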
[131066900010] |You need to quote the "alias" if it has a space in it: [131066900020] |or remove the space: [131066910010] |Learn Linux System Programming by doing projects [131066910020] |I have only a very basic idea about Linux system programming. [131066910030] |I have not done any real projects using Linux system programming. [131066910040] |In my current company I do system admin type work, but I am more interested in Linux system programming. [131066910050] |I want to do some projects on my own, so that I could put those projects in my resume when I apply for jobs at other companies. [131066910060] |Kindly tell me whether there are any projects through which I could learn more Linux system programming by doing some real programming. [131066910070] |Please note that I only have experience in C programming and not in Linux system programming. [131066910080] |But I know very basic things about Linux system programming. [131066910090] |Thanks. [131066920010] |C is fine for system programming. [131066920020] |As a starting point you could take a look at the books from this question. [131066920030] |As system programming is a broad field, perhaps they will give you a hint about where to start. [131066920040] |The ultimate project would definitely be the Linux kernel, but it's hard as your first project. [131066920050] |A smoother entrance to the field would be to rewrite some command line tools. [131066920060] |Take ls or cat or some other command line tool, and try to rewrite it. [131066920070] |Start with the most basic functionality of the command and then you can try to add more functionality over time. [131066920080] |During this process you might get ideas to improve the existing tools or to build a completely new one on your own. [131066930010] |In your system admin type work, does some task you do either puzzle you (How does that work?) or irritate you (Shouldn't that work better/faster?)?
[131066930020] |Find several of those tasks, identify the very basic feature that you don't understand, or that irritates or puzzles you. [131066930030] |Try to implement the puzzling, irritating or slow feature in C. [131066930040] |You will get a more thorough education if you have something practical motivating you, and you will have a stopping point. [131066930050] |When you've implemented your very basic feature in C, you can stop, evaluate what you've done, then pick another task that still puzzles you, or irritates you. [131066930060] |In light of what you've learned, several tasks will now seem different than they did. [131066940010] |Task manager keyboard short cut in Linux? [131066940020] |Is there any keyboard shortcut for the "task manager" (like Alt+Ctrl+Del in windows) when my machine goes into a crashed state? [131066950010] |Here are a few useful shortcuts you can try: [131066950020] |
  • displays table of processes
  • [131066950030] |
  • converts the pointer to a skull-and-crossbones and kills the process of the window you click on
  • [131066950040] |
  • kills the X-server
  • [131066950050] |
  • shuts down the system and reboots
  • [131066960010] |I am going to assume by "my machine go into crashed state" you mean that whatever task is taking up the display you are looking at has stopped responding. [131066960020] |(In general, when something crashes on Linux, only that thing crashes and everything else keeps running. [131066960030] |It's very rare that the entire machine comes to a halt.) [131066960040] |When all else fails, I like to switch back to a standard terminal interface (text mode as opposed to GUI) by hitting CTRL+Alt+F1. [131066960050] |This brings up a login prompt. [131066960060] |I then login, and enter the command top to see what is running. [131066960070] |The process at the top of the list is the one using the most CPU and usually the problem, so I kill it by pressing k, and entering the process ID (the numbers on the left). [131066960080] |I then go back to the GUI by pressing CTRL+Alt+F7 (or sometimes CTRL+Alt+F8, one of those two will work, but it might change). [131066960090] |If things are now working, I continue on, if not, I'll try again or may just force a reboot. [131066970010] |You can also use xbindkeys and define a binding to pop up top, htop, *top, gnome-system-monitor, etc. [131066970020] |Switching to a TTY (jwernerny's answer) is probably the best idea if your system or X server is acting up. [131066980010] |It's slightly related, but if you're dealing with a crashed system, you might want to invoke the Magic Sysrq key. [131066980020] |This way you can kill all processes, sync your disks, print out the active tasks, initiate a crash dump, and much more. [131066990010] |S3 sleep problems -- nVidia or Intel H67 (Sandy Bridge motherboard) issue? [131066990020] |I have a new Sandy Bridge i5-2500 and Intel H67 motherboard. [131066990030] |As the onboard video didn't work, I put in an older 8600 gts graphics card. [131066990040] |However, the S3 suspend won't work (and I can't test without the dedicated card). 
[131066990050] |I've had this experience with all other desktops (all of which have nvidia cards). [131066990060] |Any help diagnosing what might be the problem would be appreciated. [131066990070] |I've been trying a few s2ram parameters at en.opensuse.org/SDB:Suspend_to_RAM, but this is very time consuming and so far fruitless. [131066990080] |Namely, [131066990090] |
  • Of course, if you have experience with a H67 or P67 motherboard, that would be most useful.
  • [131066990100] |
  • If you have owned Intel desktop motherboards, how often does S3 work?
  • [131066990110] |
  • same with nVidia graphics cards
  • [131066990120] |
  • Does the power supply ever make a difference?
  • [131066990130] |Thanks very much! [131067000010] |linux: How can I view all UUIDs for all available disks on my system? [131067000020] |My /etc/fstab contains this: [131067000030] |There are several other disks on this system, and not all disks are being mounted to the correct location (for example, /dev/sda1 and /dev/sdb1 are sometimes reversed). [131067000040] |How can I see the UUIDs for all disks on my system? [131067000050] |Can I see the UUID for the third disk on this system? [131067010010] |There's a tool called blkid; [131067010020] |you can check this link for more info: [131067010030] |http://liquidat.wordpress.com/2007/10/15/short-tip-get-uuid-of-hard-disks/ [131067020010] |In /dev/disk/by-uuid there are symlinks mapping each drive's UUID to its entry in /dev (e.g. /dev/sda1). [131067030010] |Why is GNU screen / byobu leaving garbage text in the shell during a reverse search? [131067030020] |I recently started using GNU screen via Byobu, but I think the problem is related to screen. [131067030030] |I first SSH into a server and then do a reverse search to run a commonly run command (dumping the database). [131067030040] |I've redacted some of the text, but because the shell outputs (reverse-i-search)`': before the search, it pushes the line across the width of the terminal. [131067030050] |If I am happy with the search and accept the command, the rightmost text stays put. [131067030060] |See below: [131067030070] |So, why is this "garbage text" staying in the window? [131067030080] |It only happens in screen and only seems to happen for certain hosts that use my custom .bashrc formatting and don't have their own. [131067040010] |You are probably missing \[ and \] in your PS1. [131067040020] |They need to go around every non-printing escape sequence, e.g. the escape sequences used to color things blue and yellow. [131067040030] |See the bash man page for details. [131067040040] |It's in the section titled "Prompting".
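For example, a colored prompt with the escapes bracketed correctly might look like this (the colors are illustrative, not the asker's actual prompt):

```shell
# \[ and \] mark the color escapes as zero-width, so readline computes
# the prompt length correctly and reverse-i-search leaves no residue.
PS1='\[\e[1;34m\]\u@\h\[\e[0m\]:\[\e[1;33m\]\w\[\e[0m\]\$ '
```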
[131067040050] |It's also documented in the info docs under Controlling the Prompt. [131067050010] |Where is the setting that disallows duplicate command history? [131067050020] |Let's say I type the following sequence of commands: [131067050030] |When I go back in history by pressing up twice, I end up with cd, instead of ls. [131067050040] |I've noticed this isn't always the case, so it likely is some bash setting somewhere. [131067050050] |Which is it? [131067060010] |HISTCONTROL=ignoredups [131067060020] |http://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html#Bash-Variables [131067070010] |This is controlled by the HISTCONTROL variable. [131067070020] |If it contains ignoredups, then duplicate commands will not be saved in the history. [131067070030] |Without this string, they will be saved. [131067070040] |This would normally be set in .profile or .bashrc, using something like [131067070050] |Other values that can be used in HISTCONTROL (multiple can be used, separated by :)
  • ignorespace any commands starting with a space are not added to the history
  • [131067070070] |
  • erasedups duplicate commands are erased from the history, so the history only contains the latest command
  • [131067070080] |
  • ignoreboth identical to ignoredups:ignorespace
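These values can be combined; for example, a typical line in ~/.bashrc might be:

```shell
# Skip consecutive duplicates and space-prefixed commands, and prune
# older copies of a repeated command from the history as well.
export HISTCONTROL=ignoreboth:erasedups
```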
  • [131067080010] |Force directory to always be in cache [131067080020] |I've been testing out different methods to improve the time it takes to compile my entire C++ project. [131067080030] |Currently it takes ~5 minutes. [131067080040] |I experimented with distcc, ccache, and others. [131067080050] |Recently, I discovered that if I copy my entire project onto a RAM drive and then compile from there, it cuts the compile time down to 30% of the original, just 1.5 minutes. [131067080060] |Obviously, working from the RAM drive isn't practical. [131067080070] |So, does anyone know of a way I can force the OS to always keep a certain directory cached? [131067080080] |I still want the directory to get synced back to disk like normal, but I always want a copy of the data in memory as well. [131067080090] |Is this possible? [131067080100] |EDIT: As a possible solution, we just thought of launching a daemon that runs rsync every 10 seconds or so to sync the disk drive with a RAM drive. [131067080110] |Then we run the compilation from the RAM drive. [131067080120] |The rsync is blazing fast, but would this really work? [131067080130] |Surely the OS could do better.... [131067090010] |Forcing the cache isn't the right way to do this. [131067090020] |Better to keep the sources on the hard drive and compile them on tmpfs. [131067090030] |Many build systems, such as qmake and CMake, support out-of-source builds. [131067100010] |Linux by default uses free RAM as disk cache. [131067100020] |As a demonstration, run time find /some/dir/containing/a/lot/of/files >/dev/null twice; the second run is a lot faster because the disk inodes are cached. [131067100030] |The point here is to make use of this kernel feature rather than trying to replace it. [131067100040] |The way to do that is to change the swappiness. [131067100050] |Let's consider three main types of memory use: active programs, inactive programs and disk cache.
[131067100060] |Obviously memory used by active programs should not be swapped out, and the choice between the two others is quite arbitrary. [131067100070] |Would you like fast program switching or fast file access? [131067100080] |A low swappiness prefers to keep programs in memory (even if not used for a long time) and a high swappiness prefers to keep more disk cache (by swapping out unused programs). (The swappiness scale is from 0 to 100 and the default value is 60.) [131067100090] |My solution to your problem is to change the swappiness to something very high (90-95, if not 100) and to pre-load the cache: [131067100100] |As you can guess, you must have enough free memory to hold in cache all your source files and object files as well as the compiler, included header files, linked libraries, your IDE and other used programs. [131067110010] |Given sufficient memory, a build out of the ramdisk does no I/O. This can speed up anything that reads or writes files. [131067110020] |I/O is one of the slowest operations. [131067110030] |Even if you get everything cached before the build you still have the I/O for writes, although it should have minimal impact. [131067110040] |You may get some speedup by pre-loading all the files into cache, but the time taken to do that should be included in the total build times. [131067110050] |This may not give you much advantage. [131067110060] |Consider building the object and intermediate files in RAM rather than on disk. [131067110070] |Doing incremental builds may get you significant gains on frequent builds. [131067110080] |On most projects I do a daily clean build and incremental builds in between. [131067110090] |Integration builds are always clean builds, but I try to limit them to less than one per day. [131067110100] |You may gain some performance by using an ext2 partition with atime turned off. [131067110110] |Your source should be in version control on a journaled file system like ext3/4. 
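The swappiness answer above says to raise swappiness and load the cache; a minimal sketch of that (the sysctl value and the SRCDIR path are illustrative assumptions, not the answerer's exact commands):

```shell
# Prefer disk cache over idle program pages. This needs root, so it is
# left as a comment here:
#   sysctl vm.swappiness=90

# Warm the page cache by reading every file in the tree once.
# SRCDIR is a placeholder; point it at your project.
SRCDIR="${SRCDIR:-$HOME/src/myproject}"
[ -d "$SRCDIR" ] && find "$SRCDIR" -type f -exec cat {} + > /dev/null || true
```

After this, the first build should hit the warmed cache instead of the disk, as long as the whole tree fits in free memory.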
[131067120010] |The obvious way to keep a bunch of files in the cache is to access them often. [131067120020] |Linux is pretty good at arbitrating between swapping and caching, so I suspect that the speed difference you observe is actually not due to the OS not keeping things in the cache, but to some other difference between your usage of tmpfs and your other attempts. [131067120030] |Try observing what is doing IO in each case. [131067120040] |The basic tool for that is iotop. [131067120050] |Other tools may be useful; see Linux disk IO load breakdown, by filesystem path and/or process?, What program in Linux can measure I/O over time?, and other threads at Server Fault. [131067120060] |Here are a few hypotheses as to what could be happening. [131067120070] |If you take measurements, please show them so that we can confirm or disprove these hypotheses. [131067120080] |
  • If you have file access times turned on, the OS may waste quite a bit of time writing these access times. [131067120090] |Access times are useless for a compilation tree, so make sure they're turned off with the noatime mount option. [131067120100] |Your tmpfs+rsync solution never reads from the hard disk, so it never has to spend extra time writing atimes.
  • [131067120110] |
  • If the writes are synchronous, either because the compiler calls sync() or because the kernel frequently flushes its output buffers, the writes will take longer to a hard disk than to tmpfs.
  • [131067130010] |The inosync daemon sounds like it does exactly what you want if you're going to rsync to a ramdisk. [131067130020] |Instead of rsyncing every 10 seconds or so, it uses Linux's inotify facility to rsync when a file changes. [131067130030] |I found it in the Debian repository as the inosync package, or its source is available at http://bb.xnull.de/projects/inosync/. [131067140010] |Cron with a 12-hour issue? [131067140020] |I have a cron job that's managed through plesk. [131067140030] |I'm being told that it's being executed twice in a day, once at 5AM and once at 5PM, based on the email that's being received. [131067140040] |Here's the cron line: [131067140050] |So obviously there's a daylight-savings issue; that's not a problem. [131067140060] |But why is it running twice? [131067140070] |I thought that the hour specifier was a 24-hour denomination. [131067140080] |But it seems to be running at both 4am and 4pm server time. [131067140090] |Please note that I'm managing the cron job though a Plesk web admin panel. [131067140100] |I did some googling, but I couldn't find anything about Plesk cron bugs or issues. [131067140110] |What is going on here? [131067150010] |I don't see anything wrong with your cron job as you describe it. [131067150020] |I'd suggest looking at the syslogs of that system to see when cron runs that job. [131067150030] |There should be a log entry every time it runs. [131067160010] |I see 2 possibilities: [131067160020] |
  • You're running a very, very strange crond
  • [131067160030] |
  • There is another cron job scheduled for 4pm (check the global /etc/crontab and the crontabs of other users, or any directories with periodic jobs, like /etc/periodic on FreeBSD)
  • [131067170010] |Comment out that line in your cron and see if the job is still getting executed. [131067170020] |If it is, you'll know that it is scheduled somewhere else too. [131067180010] |How do I zip/unzip on the Unix command line? [131067180020] |How can I create and extract zip archives from the command line? [131067190010] |You can zip files up with: [131067190020] |which will do the current directory. [131067190030] |Replace . with other file names if you want something else. [131067190040] |To unzip that file, use: [131067190050] |That's assuming of course that you have a tar capable of doing the compression as well as combining of files into one. [131067190060] |If not, you can just use tar cvf followed by gzip (again, if available) for compression and gunzip followed by tar xvf. [131067200010] |Here (for anyone wondering) are the meanings of the flags in those commands: [131067200020] |c Create a new archive. t List the contents of an archive. x Extract the contents of an archive. f The archive file name is given on the command line (required whenever the tar output is going to a file) M The archive can span multiple floppies. v Print verbose output (list file names as they are processed). u Add files to the archive if they are newer than the copy in the tar file. z Compress or decompress files automatically. [131067200030] |source: Tar - Linux Commands [131067210010] |Typically one uses tar to create an uncompressed archive and either gzip or bzip2 to compress that archive. [131067210020] |The corresponding gunzip and bunzip2 commands can be used to uncompress said archive, or you can just use flags on the tar command to perform the uncompression. [131067210030] |If you are referring specifically to the zip file format, you can simply use the zip and unzip commands. 
[131067210040] |To compress: [131067210050] |zip squash.zip file1 file2 file3 [131067210060] |To uncompress: [131067210070] |unzip squash.zip [131067220010] |The most standard answer is pax, which is recommended over cpio and tar. [131067220020] |Unlike cpio and POSIX tar, but like GNU tar, pax is able to both archive files and compress the archive. [131067220030] |This behaviour is different from zip, which compresses each file before putting it in the archive. [131067230010] |and [131067230020] |You'll need to make sure these commands are installed via your package manager. [131067230030] |It's no harder than using anything else on the command line. [131067230040] |It's certainly simpler than creating archives with tar. [131067240010] |Well, when it comes to distributing files for a variety of operating systems, I'd recommend 7-zip. [131067240020] |Usually in the package p7zip, you'll get the 7z and 7za commands, with which you can create your own 7z archives. [131067240030] |7za can also decompress standard (pkzip) zip archives (and create them as well with the -tzip switch). [131067240040] |Compressing: [131067240050] |Decompressing: [131067240060] |It can also create self-extracting archives with the -sfx switch: [131067240070] |I recommend this method in case Windows users can't open 7z archives (in case you want to advise a tool for that: PeaZip). [131067240080] |If you want to use the same compression algorithm with your tarballs, use the -J switch with tar: [131067240090] |xz is a UNIX tool that uses LZMA2 for compression but otherwise works the way gz, bz2, etc. work. [131067240100] |It even works as a filter. 
[131067240110] |7z doesn't create archives with full filesystem information on UNIX, so you'd need to use tar before using 7z (but since 7z stores other information about the tar file, I'd recommend using xz, as it is designed for it): [131067250010] |pimp my GNU grep [131067250020] |On some Linux systems I have seen grep configured to highlight the match and print the matching file. [131067250030] |How do you configure GNU grep best? [131067260010] |Usage: [131067260020] |Also one of my favorites: [131067260030] |will list all PIDs of processes that match the name of some-hanging-process, which you can use in the following situation: [131067270010] |I've found that the best way to pimp grep is to use ack, which is essentially recursive grep with an intelligent ignore list (e.g., doesn't search .svn directories, ignores backup files, etc.), colour highlighting of results and Perl regexps. [131067270020] |It's what you want grep to do 98.6% of the time. [131067280010] |I set this in my .bashrc, instead of redefining grep using an alias: [131067280020] |For me, this works on Linux, Mac OS X and FreeBSD. [131067290010] |I use this function all the time: [131067290020] |as well as some emacs code that uses either git ls-files or hg manifest and xargs(1) to pass the list of files to grep directly so it doesn't have to walk the tree at all. [131067290030] |Piping the VCS's list of files into xargs is blindingly fast. [131067300010] |The --color option has already been mentioned several times, but I'd like to add that it's possible to configure the color in which the matches will be highlighted using an environment variable: [131067300020] |The color should be encoded using ANSI color codes, for reference: [131067310010] |SPEC %files attribute and Shell variables [131067310020] |I have a spec file which unpacks a library which is deployed at a location which is exported in the shell. 
i.e. [131067310040] |This fails with: [131067310050] |I.e. the shell variable does not get expanded. [131067310060] |Is there any way around this? [131067310070] |Thanks! [131067320010] |What happens if you do not enclose it in { } and only use " " [131067320020] |(or even without " " at all)? [131067330010] |Unfortunately, anything defined in the shell started by the %prep, %build or %install sections isn't preserved in the build environment. [131067330020] |You'd need to define %{AXIS2_C}, a MACRO variable (not a shell variable): [131067330030] |and then refer to it in both your shells as [131067330040] |and then in the %files section, use [131067330050] |Usually, the initial %define is at the top of the spec file, with some documentation about what it is for. [131067330060] |If you need to dynamically set the macro, you'll have to use more complex RPM spec macro commands like %() to do shell expansions. [131067340010] |udev rules don't appear to be working [131067340020] |I'm running Arch Linux on my server, and I need to let users of the group usb access my weather station. [131067340030] |Here's my rule: /etc/udev/rules.d/usb-70.rules [131067340040] |Users in the usb group still can't see the device (permission denied). [131067340050] |The Vendor and Product IDs are confirmed correct, and I've rebooted 50 million times to no avail. [131067340060] |Anyone have any ideas? [131067350010] |Can you add SUBSYSTEM=="usb" to the beginning of that rule? [131067350020] |If the version of udev is old enough (no idea what Arch uses, sorry), it might be BUS=="usb", instead. [131067350030] |What are the permissions on the device? [131067350040] |If none of that helps, can you show us the "udevadm info" output for that device? [131067360010] |How to run part of a script with reduced privileges? [131067360020] |I have the following problem: On every machine running PostgreSQL there is a special user postgres. 
[131067360030] |This user has administrative access to the database server. [131067360040] |Now I want to write a Bash script that executes a database command with psql as user postgres (psql shall execute as user postgres, not the script). [131067360050] |So far, that wouldn't be a problem: I could just run the script as user postgres. [131067360060] |However, I want to write the output of psql to a file in a directory where postgres has no write access. [131067360070] |How can I do that? [131067360080] |I thought about changing EUIDs in the script itself, however: [131067360090] |
  • I couldn't find a way to change the EUID in a Bash script
  • [131067360100] |
  • How can I change the EUID when using something like psql -U postgres -c "" >file?
  • [131067370010] |You can run the shell script as a user with better write permissions (such as root), and have the output written to a folder that the database user postgres can write to (such as /tmp). [131067370020] |After the data has been written, move it to a directory that your shell script has permission to write to (root, for example, can write anywhere). [131067380010] |You may want to use this trick: [131067380020] |tee(1) is a POSIX utility, so you may rely on its availability. [131067390010] |If you are coming up with tricky ways to circumvent security restrictions, you had better ask yourself if your objective is really wise. [131067390020] |I know nothing about PostgreSQL: do you really need to be logged in with the admin account to do what you're trying to do, or is there some way you can grant read-only permissions for whatever it is to a normal user account? [131067400010] |Why don't you just do it like this: sudo su postgres -c "psql ..." >/path/to/file? [131067410010] |Use a subshell: (su -c 'psql -U postgres -c ""' postgres) >file [131067410020] |Inside the subshell you can drop permissions to do your work, but output is redirected to your original shell, which still has your original permissions. [131067420010] |Can I do a "test run" with rsnapshot? [131067420020] |I occasionally make changes to my rsnapshot.conf and I'm wondering if there's any way I can do a test run that is sync-ed to a location other than the normal flow... something that's not an interval. [131067420030] |Is this possible? How? [131067430010] |I don't have an rsnapshot setup to test this on. [131067430020] |Be careful. [131067430030] |Personally, I think that the best thing to do is to carefully evaluate the output of rsnapshot -t interval. 
[131067430040] |However, if you want to actually move files, one way to do it might be to create an alternate config file that is identical to your real config file but with a different snapshot_root, such as: [131067430050] |And then you can run your test using [131067430060] |where interval0 is your lowest-order interval. [131067440010] |How to visualize time-series data? [131067440020] |I have some time-series data I want to visualize as a 2D plot. [131067440030] |The input is an ISO-format date and a value separated by a space, one record per line: [131067440040] |The output should be a nice-looking 2D plot. [131067440050] |Basic requirements: [131067440060] |
  • output to an X11 window (as a preview) and to a PNG file
  • [131067440070] |
  • the x-axis has to understand the dates and naturally scale the data, e.g. a gap of 3 days should be three times as wide as a gap of 1 day
  • [131067440080] |
  • should be callable from a script
  • [131067440090] |
  • nice output and convenient to use
  • [131067440100] |Bonus: [131067440110] |
  • svg output
  • [131067440120] |I tried gnuplot and it works - it has some date support: [131067440130] |But I have some problems with gnuplot: [131067440140] |
  • with default settings plots look very ugly
  • [131067440150] |
  • it is difficult to find stuff in the manual - e.g. when plotting points how do I use small filled circles instead of the default '+' sign?
  • [131067440160] |
  • the gnuplot shell is a pain in the neck: it does not use readline, its line editing is broken, command completion is a joke, and I don't know how to enable vi shortcuts (or whether they are supported at all); and what about reverse search, etc.?
  • [131067440170] |Thus my question: What are the alternatives for visualizing time-series data? [131067440180] |Or am I overstating the gnuplot issues? [131067450010] |R is better at this sort of thing because: [131067450020] |
  • It's a complete programming environment, with C and Fortran-compatible extension APIs, so there is literally nothing you can't make it do.
  • [131067450030] |
  • Many have already contributed their solutions to common problems to the CRAN: Comprehensive R Archive Network.
  • [131067450040] |
  • There are many books on time series analysis and R in general.
  • [131067450050] |R has everything you asked for: [131067450060] |
  • Outputs to X11, PNG, or (with an add-on) SVG
  • [131067450070] |
  • Filled circles for plot points: pass pch=19 or pch=20 to par() or points(). [131067450080] |There are many other plot point symbols predefined, plus all of Unicode if you're using a font with Unicode support.
  • [131067450090] |
  • Time-aware charting: if the built-in ones don't have the scaling you want, you can build anything you need with R's plotting primitives
  • [131067450100] |
  • Callable from a script: use a #!/usr/bin/Rscript shebang line on your R program file
  • [131067450110] |
  • Nice and convenient: There are GUI frontends, if you like, and if you don't like, the default command-driven environment has a lot of nice features, like the ability to see the R source code of many builtin operations, which helps to learn how the system is put together. [131067450120] |(Yes, much of R is written in R!)
  • [131067450130] |
  • Pretty plots: Antialiasing is the default if R is built against Cairo, which it will be if it's a recent build on Linux. [131067450140] |Old versions of R may not have AA built in. [131067450150] |For an idea of the capability of R if you put a bit of time into it, check this out: [131067450160] |(Click image for article describing it.)
  • [131067450170] |Regarding the gnuplot command line, you can build it to support GNU readline, BSD libedit, or as a fall-back, a custom built-in command line editing scheme. [131067450180] |(This according to p.20 of the manual.) [131067450190] |I have gnuplot 3.7 on one machine and 4.0 on another, and they're both built with readline. [131067450200] |Perhaps you have a special masochist's/minimalist's build? :) [131067460010] |RRDTool's whole purpose of existence is plotting time series data, but it's primarily meant for automated graphing and may not be the best fit for your needs. [131067460020] |That said: [131067460030] |
  • It can output in either PNG or SVG, but has no preview functionality.
  • [131067460040] |
  • Time-scaling is built in.
  • [131067460050] |
  • Easily scripted (command line access or libraries in many scripting languages).
  • [131067460060] |
  • Output can be made to look pretty decent.
  • [131067470010] |How to edit command line in full screen editor in ZSH? [131067470020] |In bash, using vi mode, if I hit Esc,v, my current command line is opened in the editor specified by $EDITOR and I am able to edit it in full screen before 'saving' the command to be returned to the shell and executed. [131067470030] |How can I achieve similar behaviour in zsh? [131067470040] |Hitting v in command mode results in a bell and has no apparent effect, despite the EDITOR environment variable being set. [131067480010] |See edit-command-line in zshcontrib. [131067490010] |You can use fc to edit the last command in history. [131067490020] |It's not the same as editing the current command, but a quick hit on the Enter key makes your current command the last command in history. [131067500010] |How do I associate applications with KDE Activities? [131067500020] |How do I associate applications with KDE Activities? [131067500030] |It doesn't seem obvious how it works. [131067500040] |Are there any tricks? [131067510010] |It doesn't seem to be possible yet. [131067510020] |If it is ever added, it will probably appear in the window/application configuration (from the advanced section in the window context menu). [131067510030] |Activities are still a very fresh feature. [131067520010] |OK, first you want to open your activities (Super (Windows)+Q), unlock widgets, and create at least one other activity. [131067520020] |Make sure that more than one activity is running, i.e. not stopped (marked with a red X). [131067520030] |*(note: Remember the activity that is highlighted is the currently active one, and according to aseigo only one can be active at a time, though I haven't found this to be exactly true.) [131067520040] |Now right-click on the title bar of the window you want to associate with an activity. [131067520050] |Go to Activities, and select the activity you want it to be associated with. 
[131067520060] |Please note this dialog is only present if there are other activities in a "not stopped" state; if you stop all but one it won't show the activities dialog. [131067520070] |Stopped activities are not shown in this dialog. [131067520080] |note: this only works in 4.6 (or later? activities have changed much over KDE 4's lifetime; I actually don't know if they'll work this way in 4.7, and I honestly hope they don't, as this is not intuitive) [131067530010] |BURG menu error [131067530020] |I installed Ubuntu 10.10 recently and installed BURG (a bootloader based on GRUB). [131067530030] |However, the boot menu seems to have swapped Vista and Windows Recovery mode; so to go to Vista I need to select the 'recovery mode'. [131067530040] |How can I fix this? [131067540010] |Partition manager that can handle LVM? [131067540020] |I have been looking, but neither GParted nor KDE Partition Manager can handle LVM. [131067540030] |Working with the command line is probably fine, but it would be clearer to have a GUI tool here. [131067540040] |Does such a thing exist? [131067550010] |In Red Hat's set of administration tools, there's system-config-lvm, which is optionally installable in other distributions like Fedora and Debian. [131067550020] |Recent versions of gnome-disk-utility support LVM. [131067550030] |The newly-released KDE 4.6 gains udisks as a Solid backend, which should provide LVM support. [131067550040] |(Out of the three, this is the only one I haven't tried.) [131067560010] |What customizations have you done on your shell profile to increase productivity? [131067560020] |I know some people have some startup scripts and some people personalise the prompt. [131067560030] |One developer uses short aliases for the long paths he often visits and the frequent commands he runs. [131067560040] |What are all the effective customizations you have done on your UNIX profile to increase productivity and ease of use? [131067570010] |
  • bashrc: I'm a zsh user, so I have a few lines in my bashrc that start zsh if it is available on a system.
  • [131067570020] |
  • zshrc: Instead of copying my zshrc from something like grml (though their zshrc is pretty good, so if you don't want to roll your own, theirs is probably one of the best) I write my own zshrc. [131067570030] |
  • I have a customized prompt. [131067570040] |Among some other things it shows the return code of the last command if it was unequal to 0.
  • [131067570050] |
  • I have some aliases. [131067570060] |Because I have accounts on quite a number of servers, I sometimes have to check which version of a command is available on a system and set the alias accordingly.
  • [131067570070] |
  • I set my PATH variable.
  • [131067570080] |
  • I set some other environment variables (for example $EDITOR)
  • [131067570090] |
  • vimrc: I'm a vim user, so I have a customized vimrc and a customized color scheme.
  • [131067570100] |
  • screenrc: I use GNU screen to avoid having to open multiple terminals and to preserve history while not being logged in, so I have my own screenrc.
  • [131067580010] |

    .vimrc

    [131067580020] |save file with root permissions by typing w!!: [131067580030] |
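    The mapping itself was elided above; a widely used version of this trick (an assumption here, not necessarily the author's exact mapping) writes the buffer through sudo tee:

    ```vim
    " :w!! writes the current file with root permissions when you forgot
    " to open it with sudo
    cmap w!! w !sudo tee % > /dev/null
    ```

    After sourcing this from .vimrc, typing :w!! on a root-owned file prompts for your sudo password and saves the buffer.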

    .bashrc

    [131067580040] |Don't bother with devices or binary files when grepping: [131067580050] |Share code on the web (like pastebin, but simpler) with cat 1337.sh | webshare [131067580060] |It gives back a short URL in your clipboard; you can append ?whatever-lang to the returned URL to have it syntax-highlighted and its lines numbered. [131067580070] |
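    The alias itself was elided; with GNU grep, a plausible version of "skip devices and binary files" (an assumption, not the author's exact line) is:

    ```shell
    # Skip binary files and device files when searching.
    alias grep='grep --binary-files=without-match --devices=skip'
    ```

    --binary-files=without-match is the long form of -I, and --devices=skip is the long form of -D skip.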

    .inputrc

    [131067580080] |Use vi mode in everything that uses the readline library (many programs): [131067590010] |If you can, TURN ON AUTOCOMPLETE AND FILE NAME SPELLING CORRECTION! [131067590020] |Those are probably the two things that will save you the most time. [131067590030] |Then, learn to use them: Bash and Zsh have tab-completion. [131067590040] |Ksh has an inefficient escape-backslash, so I'd recommend against Ksh. [131067590050] |I use Zsh, but aliases like this would work in almost any shell except Csh: [131067590060] |It seems like an alias for 'ps' should be in there, but I find myself using 'ps' in a wide variety of ways, and I haven't found anything so far. [131067590070] |In Zsh, set up your RPROMPT (not a typo!) variable: [131067590080] |The entire directory appears on the right side of the command line, ready for cutting-n-pasting. [131067590090] |More on that later. [131067590100] |You should use a properly compiled modern Vim, because of the ability to have multiple vim windows into a file, and multiple buffers. [131067590110] |Your .vimrc could have things like this in it: [131067590120] |A lot of those are personal preference, but I do happen to believe that 8-space tabs make code less readable, and there's a study floating around to prove it. [131067590130] |Also, the "mouse=c" is important. [131067590140] |You shouldn't be using your mouse to move around inside a file. [131067590150] |Taking your hands off the keyboard, touching the mouse and then moving your hands back is slow. [131067590160] |Use "hjkl" cursor movement, and the other keyboard paging and cursor movement keys. [131067590170] |If you're using X11, you should do a few things to your Xterm configuration. [131067590180] |This comes out of my .Xresources file: [131067590190] |Give Xterm a scrollbar by default, save 1000 lines of text in the buffer; that's pretty standard. [131067590200] |The charClass directive makes a "word" include things like '.', '/' and '*'. 
[131067590210] |Double-click on any part of a '/'-separated file name and you get the whole thing, less ':' characters. [131067590220] |cutToBeginningOfLine works with the Zsh RPROMPT above. [131067590230] |Triple-click on the path of the current working directory that appears on the RHS of your command line, and you pick up only the path: the copy stops at the beginning of the word. [131067590240] |Highly efficient once you're used to it. [131067590250] |The above X resources also set up a paste key. [131067590260] |That way, once you've copied (probably using the mouse) you can paste without moving your hand back to the mouse to click. [131067600010] |Adding the non-zero return value of the last command is a great idea. [131067600020] |I think the original poster was specifically asking about .profile/.cshrc/.bashrc. [131067600030] |It's worth mentioning the list of other commonly customized RC files, but I would stick to just shell customizations for this question. [131067600040] |I also recently added a flag in my prompt that shows up when the shell is running under screen. [131067600050] |It uses the Solaris "ptree" command to search ancestor processes, but you could use the "pstree" command on Linux to do the same thing. [131067600060] |It took me a few minutes to figure out how to embed the return code of the last command, so I'll post it here. [131067600070] |I'm sure that could be made more beautiful. :-) [131067600080] |Future tip: be careful about reading $? after using "if [". [131067600090] |If the left bracket is a built-in, it will not override the value of $?. [131067600100] |But if you use a shell where [ is not built-in, then it will reset the value of $? after testing. [131067600110] |It's safer to assign $? to a temporary variable right away and then test that variable. [131067610010] |.zshrc: [131067610020] |.xmodmaprc: [131067610030] |(Swaps Escape and Caps Lock keys). 
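The .xmodmaprc contents were elided above; a common reconstruction of an Escape/Caps Lock swap (an assumption, not necessarily the author's exact file) looks like:

```
! ~/.xmodmaprc: swap Escape and Caps Lock
remove Lock = Caps_Lock
keysym Escape = Caps_Lock
keysym Caps_Lock = Escape
add Lock = Caps_Lock
```

Load it with xmodmap ~/.xmodmaprc, typically from a session startup script.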
[131067620010] |I mess with my bashrc a lot, since I use the terminal a lot (it makes me learn fast and discover interesting stuff as well as interesting tools). [131067620020] |I usually define a lot of functions in my bashrc. [131067620030] |Examples: [131067620040] |Extract archives: [131067620050] |rename files and folders: [131067620060] |and like this for splitting large files into several small ones: [131067620070] |Also, I defined a lot of aliases, since I find it far easier in some cases to use one command with default arguments (as with ls, grep and other small commands) than to type it all out every time. [131067630010] |(Community wiki, so each trick belongs in a separate answer.) [131067630020] |safe logout [131067630030] |Ctrl+D is the easiest way to exit the shell, but if you still have jobs running, it will happily exit the shell anyway. [131067630040] |By default, this means all the programs you were running from inside that shell will be killed. [131067630050] |Some shells will only let you log out after pressing Ctrl+D twice, but it's still too easy to do that accidentally. [131067630060] |So instead, add this to .bashrc or .zshrc or whichever config file you prefer. [131067640010] |(Community wiki, so each trick belongs in a separate answer.) [131067640020] |search your history for all the ways you ran a command [131067640030] |You might already know about Ctrl+R, but this way is much smoother IMHO. [131067640040] |Set up Alt+P to search history for commands that start with what you already typed. [131067640050] |e.g. ls Alt+P, Alt+P, Alt+P will search backwards through all your ls commands. [131067640060] |You need to put this in your /etc/inputrc or .inputrc for bash: [131067640070] |and this in your .zshrc for zsh: [131067640080] |You could even go one step further and make the Up arrow do this. [131067650010] |better tab completion [131067650020] |I don't think anyone has mentioned customizing Tab completion yet. 
[131067650030] |Here's what I have. [131067650040] |The two main things it does are: [131067650050] |
  • each command will tab complete depending on what the command is expecting e.g. cd will only suggest directories
  • [131067650060] |
  • ignore case e.g. d will still complete Desktop and Downloads
  • [131067650070] |For bash: [131067650080] |For zsh: [131067660010] |simple calculator [131067660020] |You can use $(( ... )) or expr ... to do very basic calculations, but they do integer division, e.g. [131067660030] |A better way is to use bc. [131067660040] |then: [131067670010] |show the most recently changed file [131067670020] |Often, I want to look at the most recent file. e.g., I might be in the logs directory, and want to see which file is most recent because that is the first place to look to see why something is not working. [131067670030] |ls -lt | head is a bit cumbersome to type, so here's an alternative: [131067670040] |It also takes a wildcard or list of files, e.g. [131067670050] |which is especially handy if all your log files have a timestamp in their name. [131067670060] |You can find the latest log for that program, without worrying what format the timestamp is in. [131067680010] |make a directory and cd in one command [131067680020] |Most of the time when I do a mkdir, my next command is a cd into that directory. [131067680030] |This saves some typing: [131067680040] |for example: [131067680050] |Another thing I find useful is an easy way to make a throwaway directory, e.g. if I'm compiling a program or even if I'm trying to reproduce a problem on this site. [131067680060] |Sometimes I might forget to clean up the directory. [131067680070] |showing it working: [131067680080] |I am relying on the system cleaning up /tmp after a reboot, but it would be possible to enhance this, e.g. make it delete the temp dir after exiting the shell. 
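The function bodies above were elided; plausible minimal reconstructions of the three helpers described there (the names come from the text, the bodies are assumptions) look like:

```shell
# bc-based calculator that doesn't truncate to an integer
calc() { echo "scale=4; $*" | bc; }

# most recently modified file from a wildcard/list (defaults to current dir)
latest() { ls -t "$@" | head -n 1; }

# make a directory (including parents) and cd into it in one step
mkcd() { mkdir -p -- "$1" && cd -- "$1"; }
```

For example, calc 10/4 prints 2.5000, and latest /var/log/myapp-*.log prints the newest matching log file regardless of the timestamp format in the names.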
[131067690020] |I also like my shell to cheer me up when I use it, so I added a bit of silliness: [131067690030] |so when I run commands, I get some nice visual feedback: [131067690040] |edit: this is something I put in my ~/.bashrc [131067700010] |Safe compression [131067700020] |Compression programs delete the original file by default. [131067700030] |I don't like that. [131067700040] |Multi-line prompt [131067700050] |
  • Shows the current directory on a separate line. [131067700060] |Useful when handling a deep directory tree on an 80-column terminal.
  • [131067700070] |
  • Having a clock in the corner is a great thing if you use a graphical environment. [131067700080] |This prompt shows the time. [131067700090] |Unfortunately you have to press enter to update it.
  • [131067700100] |
  • You can display "tags" with environment variables. [131067700110] |Example: [131067700120] |
  • The code is at least partially based on this.
  • [131067700130] |History settings [131067700140] |
  • Shamelessly stolen from here.
  • [131067700150] |
  • I added support for explicitly disabling logging. [131067700160] |Useful if you are dealing with programs that expect passwords as a CLI argument.
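A sketch of history settings of this kind (the original code was elided, so the values below are illustrative assumptions; the original's explicit off-switch may have worked differently):

```shell
# Keep a long history, but drop duplicates and space-prefixed commands.
export HISTSIZE=10000
export HISTFILESIZE=10000
# ignoreboth = ignoredups:ignorespace. The ignorespace half acts as an
# explicit "disable logging" hook: start a command with a space and it
# never reaches the history file, which is handy when a program expects
# a password as a CLI argument.
export HISTCONTROL=ignoreboth
```

With this in place, typing " myprog --password=secret" (note the leading space) keeps the password out of your history.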