[131021570010] |how to ensure a program is always running but without root access? [131021570020] |Currently I need to have a program running all the time, but when the server is rebooted I need to manually run the program. [131021570030] |And sometimes I'm not available when that happens. [131021570040] |I can't use a normal configuration to restart my program when the server is starting because I don't have root access and the administrator don't want to install it. [131021580010] |I posted this on a similar question [131021580020] |If you have a cron daemon, one of the predefined cron time hooks is @reboot, which naturally runs when the system starts. [131021580030] |Run crontab -e to edit your crontab file, and add a line: [131021580040] |I'm told this isn't defined for all cron daemons, so you'll have to check to see if it works on your particular one [131021590010] |A more general solution would be to set up a cronjob which checks if your program is running every few minutes. [131021590020] |I run dircproxy as regular user and the crontab entry looks like: [131021590030] |*/10 * * * * /path/to/dircproxy_cron.sh [131021600010] |Even more reliable solution would be to use a monitoring tool like Monit or god that automatically detects that a process is dead and restarts it whether it happened during reboot or not. [131021600020] |You might think that's an overkill but actually the configuration of Monit is a simple as [131021600030] |Of course the assumption is that the administrator is willing to install the monitoring tool for you so it could be a chicken and egg problem :) [131021610010] |I would recommend http://supervisord.org/ it will supervice your process, restart it for you and has a lot of configuration and should be run as a normal user. [131021620010] |This is not an answer to the general question, but should probably be mentioned. [131021620020] |Unix and Unix-like systems are often employed in multi-user environments since they are multi-user operating systems. [131021620030] |Because of this, administrators of such systems are often hesitant to install unknown services or software because it may lead to instability or insecurity for all of the users of the system. [131021620040] |Thus, I think the first steps are [131021620050] |
  • Determine why the administrator doesn't want the service installed.
  • [131021620060] |
  • Think about whether you can accomplish the task in some way that would address the administrator's concerns.
  • [131021620070] |
  • If not, determine whether the administrator's concerns are valid. [131021620080] |Namely, ask yourself, "Could this cause instability for other users?", "Could this affect performance for other users?", and "Could this leave other users open to security vulnerabilities?"
  • [131021620090] |
  • If the answer to all of those is no, ask yourself whether doing this without the administrator's permission could get you into trouble.
  • [131021620100] |
  • If it won't or if you are willing to accept the risk, try one of the methods mentioned above.
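If you do go the cron route described earlier, here is a minimal sketch of the kind of keep-alive script the dircproxy example refers to; the program name and paths are placeholders, not anything from the original posts:

    #!/bin/sh
    # keepalive.sh -- restart "myprog" if it is no longer running (illustrative names)
    if ! pgrep -x myprog >/dev/null 2>&1; then
        "$HOME/bin/myprog" >>"$HOME/myprog.log" 2>&1 &
    fi

The matching crontab entries, installed with crontab -e, would then look like:

    # check every 10 minutes; @reboot also starts it at boot where the cron daemon supports it
    */10 * * * * $HOME/bin/keepalive.sh
    @reboot      $HOME/bin/myprog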
  • [131021630010] |bash: recall command by typing some characters of it [131021630020] |Possible Duplicate: best way to search my shell's history [131021630030] |In MSDOS I have the option to start typing some characters of a previous command and then pressing F8 it will search the command history buffer for the first occurrence of these characters. [131021630040] |Repeatedly pressing F8 will continue the search for the next match. [131021630050] |I love this functionality! [131021630060] |Does it exist on Linux/UNIX? [131021640010] |Linux for low-end hardware and internet browsing [131021640020] |I'm going to host an event where we have ~15 computers with low-end hardware. [131021640030] |I think the computers got 256 MBs of RAM, 5 GB of storage and a 300 MHz Intel CPU. [131021640040] |We've been running DSL on the machines, but since we are only going to use them to browse the web (possibly using Chrome), we'd like to look into as many options as possible. [131021640050] |Does anyone have experience with something like this? [131021650010] |DSL would be your best bet, but you might try out a minimal Arch installation. [131021650020] |Since with arch you build up the system from base. [131021650030] |Arch provides a minimal environment upon installation, (no GUI), compiled for i686/x86-64 architectures. [131021650040] |Arch is lightweight, flexible, simple and aims to be very UNIX-like. [131021650050] |Its design philosophy and implementation make it easy to extend and mold into whatever kind of system you're building- from a minimalist console machine to the most grandiose and feature rich desktop environments available. [131021650060] |Rather than tearing out unneeded and unwanted packages, Arch offers the power user the ability to build up from a minimal foundation without any defaults chosen for them. [131021650070] |It is the user who decides what Arch Linux will be [131021650080] |- Arch Linux Wiki [131021650090] |I have arch installed on a 500MB RAM, 2GB Storage and 500 MHZ Intel CPU. [131021650100] |A bith tight on storage, but otherwise perfect. [131021650110] |EDIT: Note that arch only works on i686 and x86-64 base systems [131021650120] |Otherwise, I have heard good things about SLAX [131021660010] |I have had great success using Puppy Linux on older hardware, and as Stefan mentioned SLAX is another good one. [131021660020] |The last box I ran Puppy 5.1 on had a: [131021660030] |
  • Pentium 3 450 MHz processor
  • [131021660040] |
  • 256 MB of RAM
  • [131021670010] |I would suggest a Gentoo installation with distributed compilation, X11, Firefox (or chrome), and E17. [131021670020] |Should be really fast once installed and compiled. [131021670030] |Also, you could pre-compile things in a chrooted environment on faster hardware, and the redeploy the binaries. [131021680010] |How can I move the home directory to a separate partition? [131021680020] |I would like to install another distribution but keep my home directory. [131021680030] |Is there a way to move the home directory to a separate partition? [131021680040] |I don't have an external hard drive available to back up my data. [131021680050] |I would like to set up my partitions as suggested here. [131021690010] |Assuming you have a separate partition already (and if you don't, you probably want to use gparted or something similar, to make one), you simply do the following (preferably as the root user): [131021690020] |And then assuming that everything works the way it should (log in as the user and test) [131021690030] |This copies the entire directory (and all the contents) to the destination mountpoint, renames the original home directory (in case something is screwed up in the process, then we can recover), creates a symbolic link from the new location to the original home directory (which everything (like /etc/passwd) is still pointing to), then assuming it worked, removes the backup copy we made, leaving the copy we put at the destination filesystem. [131021700010] |Hello vanillaike, [131021700020] |The title of the post and your question caused some confusion to me. [131021700030] |Do you want to separate your home into a partition, or do you just want to reinstall and keep the same home? [131021700040] |If all that you want is to reinstall the whole OS while keeping your home then you can backup your home into a place that will not be affected by the install, then restore it after that, together with a permission fix. [131021700050] |If you want to follow some best practices and separate your home then here is the guide you need. [131021700060] |It's written for Ubuntu, but I think the same thing goes for other distros. [131021710010] |This question is distro-agnostic, so If mention anything specific that you don't have, just use the equivalent on your side. [131021710020] |I really recommend you buy an external for backups, trust me, losing your data is the worst. [131021710030] |Proceed at your own risk - But if you can't get one, here's what you can do. [131021710040] |What you need [131021710050] |
  • the size of your /home directory
  • [131021710060] |
  • free space, more than the size of your /home directory
  • [131021710070] |
  • disk partitioning tool, I recommend gparted
  • [131021710080] |What to do [131021710090] |
  • Check the size of your /home directory (the last result will be home total): [131021710100] |du -h /home
  • [131021710110] |
  • Check if you have enough free space for the new partition: [131021710120] |df -h
  • [131021710130] |
  • Install gparted [131021710140] |sudo apt-get install gparted
  • [131021710150] |You need more free space than the size of your /home directory. [131021710160] |If you don't have the free space, then you won't be able to create that new partition, and need to move your data onto an external anyway. [131021710170] |If you have the space, use gparted to shrink your existing partition, and then create a new partition with the freed unallocated space. [131021710180] |Once your new partition is ready, note it's /dev/sdax (use sudo fdisk -l to see this), and copy your /home files to it. [131021710190] |Using the partition in a new distro [131021710200] |You mentioned installing another distro, if you plan to override your current distro, then during installation you should be asked to setup partitions. [131021710210] |At that point you can specify this partition as /home, choose not to format it, and all will be well, you can skip this next section. [131021710220] |If however you want your current distro to work with the new /home partition, follow this section: [131021710230] |Mount the partition in an existing distro [131021710240] |We have to tell your OS to use the partition as your new /home, we do this in fstab, but first let us find the UUID of this new partition: [131021710250] |Cross reference your new partition's /sdax and copy the UUID of it, mine looks like 3d866059-4b4c-4c71-a69c-213f0e4fbf32. [131021710260] |Backup fstab: sudo cp /etc/fstab /etc/fstab.bak Edit fstab: sudoedit /etc/fstab [131021710270] |The idea is to add a new line that mounts the partition at /home. [131021710280] |Use your own UUID, not the one I post here ;) [131021710290] |Save and restart, and test if the new partition mounts to /home. [131021710300] |Run df -h to list all mounted partitions, /home should now be in that list. [131021710310] |Notes [131021710320] |
  • It might be a good idea to familiarize yourself with fstab if you don't know it well. [131021710330] |Just take your time and think about each step.
  • [131021710340] |
  • If you install a new distro, and use the same login name, your old /home files will automatically fall under your ownership.
  • [131021710350] |
  • This is not a trivial topic to cover in one post, but I think I got most of it. :)
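To make the fstab step above concrete, here is a sketch of the commands and the line involved; the device name and filesystem type are assumptions, and the UUID is just the example value quoted in the answer:

    # find the UUID of the new partition (replace /dev/sda3 with your own /dev/sdaX)
    sudo blkid /dev/sda3

    # line to add to /etc/fstab (ext4 assumed); then reboot or run: sudo mount -a
    UUID=3d866059-4b4c-4c71-a69c-213f0e4fbf32  /home  ext4  defaults  0  2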
  • [131021720010] |Databases for embedded Linux? [131021720020] |Hi, [131021720030] |I need to choose a database for an application on an embedded linux platform. [131021720040] |Any suggestions on which databases are typically used with embedded linux? [131021720050] |Please let me know. [131021720060] |Regards Radha. [131021730010] |SQLite's small size and levels of completeness, stability &speed make it a popular choice for low-resource environments, which embedded systems usually are. [131021730020] |It is used by parts of the current iPhone, Android and Symbian phone operating systems for this reason. [131021730030] |You might want to add some details to your question to get more specific answers: do you know what sort of hardware specification you will be working with for instance? [131021740010] |I Agree - SQLite is a c written engine, easy to install and use. [131021740020] |Recommended. [131021750010] |BDB (libdb) has historically been the embedded database of choice for many applications, shipping with most UNIXes and used by lots of software. [131021750020] |If you're accustomed to SQL relational databases, though, BDB is not one - it is simply a (really good) key-value store. [131021750030] |SQLite is a different popular embedded database. [131021750040] |As the name implies, it is a SQL database engine, and is not in the same category of lean and compact as BDB, but gives you the power to do very different things. [131021750050] |What sorts of requirements are you targeting? [131021760010] |grep - why do brackets in grep pattern remove the grep process from ps results? [131021760020] |Why do brackets in a grep pattern remove the grep process from ps results? [131021770010] |Because the brackets need to be escaped, for bash once and for grep again: [131021780010] |When you run ps -ef | grep string, grep is displayed in the output because string matches [...] grep string. [131021780020] |But, when you run ps -ef | grep [s]tring the line isn't displayed, because grep translates [s]tring to string, while ps outputs [...] grep [s]tring, and that doesn't match string [131021790010] |Enable running mono apps by double click on the .exe [131021790020] |I always wondered if running mono apps will ever be available for Linux by just double clicking on the .exe. [131021790030] |Now in order to have a launcher on Gnome the best way is to add a new bash file which will do the 'mono myapp.exe' for you. [131021790040] |I remember there were some ideas to have that in Linux long time ago, but nothing recently... [131021800010] |Ubuntu have this by default, AFAIK. [131021800020] |For an idea of how this might work, take a look at: binfmt_misc [131021810010] |Where are cron errors logged? [131021810020] |If I setup cron jobs incorrectly they appear to silently fail. [131021810030] |Where should I look for an error log to understand what went wrong? [131021820010] |You should get email from crond when the job either fails to run or when the job returns a nonzero exit code. [131021820020] |Try typing: [131021820030] |at the command prompt. [131021820040] |mailx(1) is the basic mail reading program on most every Unixlike system. [131021820050] |It is very primitive by modern standards, but you can pretty much count on it to always be available. [131021820060] |Other, better mail agents may be available, but there are enough of them that you never know which one is installed on some random machine you happen to be using. 
[131021820070] |Note that unless you have configured your system as an Internet email server, this mail subsystem is used only within the machine. [131021820080] |You can send email to and receive from other users on the machine, but you may not be able to send email out to the world, and email from the outside world certainly won't be able to come to your machine. [131021830010] |The default cron configuration will send you a mail with the output of your program. [131021830020] |If this fails, you could try wrapping your failing program in a shell script that ensures that the program does not fail, and you could further log the output. [131021830030] |This is a configurable setting on some cron implementations. [131021840010] |Cron logs basic info to /var/log/messages, but mails any program output to the invoking user. [131021850010] |You can always explicitly send the job output to a log file: [131021850020] |Keep in mind that this will supersede the mail behaviour that has been mentioned before, because crond itself won't receive any output from the job. [131021850030] |If you want to keep that behaviour you should look into tee(1). [131021860010] |If you aren't seeing the mails, or if you are spamming root@yourcompany with the errors, try sending the output to syslog instead: [131021860020] |Then, wait for the cronjob to run and look for the error in /var/log/messages (or /var/log/user.log on some systems). [131021860030] |This works great for error messages which are only 1-2 lines long, such as "yourcronjob: command not found". [131021860040] |It also makes use of your existing syslog infrastructure (log rotation, central syslogging, Splunk, etc.). [131021860050] |It also reduces email spam to root. [131021860060] |It may not be a good solution if your cronjob generates hundreds of lines of output. [131021870010] |As others have pointed out, cron will email you the output of any program it runs (if there is any output). [131021870020] |So, if you don't get any output, there are basically three possibilities: [131021870030] |
  • crond could not even start a shell for running the program or sending email
  • [131021870040] |
  • crond had troubles mailing the output, or the mail was lost.
  • [131021870050] |
  • the program did not produce any output (including error messages)
  • [131021870060] |Case 1. is very unlikely, but something should have been written in the cron logs. [131021870070] |Cron has an own reserved syslog facility, so you should have a look into /etc/syslog.conf (or the equivalent file in your distro) to see where messages of facility cron are sent. [131021870080] |Popular destinations include /var/log/cron, /var/log/messages and /var/log/syslog. [131021870090] |In case 2., you should inspect the mailer daemon logs: messages from the Cron daemon usually appear as from root@yourhost. [131021870100] |You can use a MAILTO=... line in the crontab file to have cron send email to a specific address, which should make it easier to grep the mailer daemon logs. [131021870110] |For instance: [131021870120] |In case 3., you can test if the program was actually run by appending another command whose effect you can easily check: for instance, [131021870130] |so you can check if crond has actually run something by looking at the mtime of /tmp/a_command_has_run. [131021880010] |I use vixie-cron, so I don't know if this applies to everything. [131021880020] |But I have a dead.letter file that contains all the output of the job. [131021880030] |In my /root/ folder I have crons.cron which I set as my crontab by running crontab /root/crons.cron. dead.letter will be created in /root/ as well. [131021880040] |Edit I just Google'd dead.letter, and it's an undeliverable mail. [131021880050] |It has nothing to do with cron apparently. [131021880060] |If you don't have mail set up correctly (like me), you'll have the file. [131021890010] |Gconf profiles personialized with environment-variables [131021890020] |What is best practice for creating mandatory / default gconf-profiles using the users environment-variables. [131021890030] |If so there is easy to maintain systemwide and distribute profiles in a corporate network. [131021890040] |This is what I want: [131021890050] |For an example, I want to use a gconf-key where some information is defined by environment variables, the key /apps/evolution/mail/accounts and $USER / $USERNAME: [131021890060] |[ $(USERNAME) $(USER)@example.com imap://$(USER)% 40example.com@imap.example.com/;;use_ssl=when-possible smtp://$(USER)%40smtp.example.com;;use_ssl=when-possible [131021890070] |, ] [131021890080] |I believe that I need a subsystem that processes template profiles into something gconfd can use. [131021890090] |I have tried desktop-profiles and sabayon without any luck. [131021890100] |Evoldap-backend works only for evolution and feels a little overkill even if I end up with LDAP / Gosa or LDAP / phamm for authentication. [131021890110] |Mail, IM and VoIP / Telepathy uses only information that is easy reachable from GECOS (/etc/passwd) and standard login environment. [131021890120] |It feels more robust to administer one systemwide template than a profile per user. [131021900010] |This should be solvable with a simple bash script using standard Linux tools: [131021900020] |
  • Check if the folder /home/"$USER"/.gconf/apps/evolution exists
  • [131021900030] |
  • If it doesn't exist, copy a template directory over
  • [131021900040] |
  • Replace the placeholders with sed.
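A minimal sketch of such a script; the template location and the @USER@ placeholder are invented for illustration:

    #!/bin/bash
    # Deploy a per-user Evolution gconf profile from a system-wide template.
    target="$HOME/.gconf/apps/evolution"
    template="/etc/skel-gconf/evolution"   # hypothetical template directory

    if [ ! -d "$target" ]; then
        mkdir -p "$(dirname "$target")"
        cp -a "$template" "$target"
        # substitute the current login name for the @USER@ placeholder
        find "$target" -type f -name '%gconf.xml' \
            -exec sed -i "s/@USER@/$USER/g" {} +
    fi

Run it from the user's login scripts so the profile is generated on first login.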
  • [131021910010] |What's the best way to take a segment out of a text file? [131021910020] |What's a good way of extracting say, lines 20 -45 out of a huge text file. [131021910030] |Non-interactively of course! [131021920010] |you could try: [131021920020] |or [131021920030] |update: [131021920040] |As Mahomedalid pointed out, cat is not necessary and a bit redundant, but it does make for a clean, readable command. [131021920050] |If cat does bother you, a better sollution would be: [131021930010] |Actually, you don't need the cat, with: [131021930020] |or [131021930030] |it's enough [131021940010] |Even simpler: [131021940020] |The -n flag disables the default output. [131021940030] |The "20,45" addresses lines 20 to 45, inclusive. [131021940040] |And the "p" command prints the current line. [131021950010] |Since sed and awk were already taken, here is a perl solution: [131021950020] |Or, as pointed out in the comments: [131021970010] |Is my USB 2.0 device running at full speed? [131021970020] |How can I tell if my USB flash drive is running at full USB 2.0 speed? [131021970030] |Currently I'd just like to verify that the drive is plugged into a 480 Mbps port, but in the past I've been unsure if a given device was USB 2.0 or an older revision. [131021970040] |I'd like a way to identify its capabilities other than testing read/write speed. [131021980010] |You could check out [131021980020] |On my system I get for example this line (and much more information): [131021980030] |Port 6: 0000.0503 highspeed power enable connect [131021980040] |Alternatively you can issue a: [131021980050] |in one terminal and then plug in the USB flash drive. [131021980060] |On my system I get for example this message: [131021980070] |Sep 15 22:10:40 foo kernel: [405279.750140] usb 2-2: new high speed USB device using ehci_hcd and address 24 [131021980080] |ehci_hcd is the USB 2.0 controler module in the kernel (i.e. everything is ok, the USB 2.0 stick is used with the right speed). [131021980090] |Unless you get a follow up message like this: [131021980100] |Sep 15 22:15:25 foo kernel: [405564.451402] usb 6-2: not running at top speed; connect to a high speed hub [131021980110] |(in this case probably compatibility problem with a low quality USB 2.0 stick (Kensington), since the computer's USB port supports USB 2.0 high speed devices otherwise) [131021980120] |With a non USB 2.0 high speed stick I get this message: [131021980130] |Sep 15 22:15:25 foo kernel: [405564.300127] usb 6-2: new full speed USB device using uhci_hcd and address 4 [131021980140] |If your user account does not have access to /var/log/kern.log (or another system dependent kernel log message file location), you could use the command dmesg as an alternative to display the contents of the kernel message buffer. [131021980150] |The disadvantage of dmesg is just that it does print such nice time stamps and it does not have a follow output option. [131021990010] |What charset encoding is used for filenames and paths on Linux? [131021990020] |Does it depend on what file system I use? [131021990030] |For example, ext2/ext3/ext4 but also what happens when I insert one of those "joliet" CD-ROMs with ISO 9660? [131021990040] |I've heard that POSIX contains some sort of spec for the charset encoding of filenames? [131021990050] |Essentially, what I wonder is if I got a UTF-8 encoded filename, what processing/coversion do I need to do before I pass it to a file I/O API in Linux? 
[131022000010] |It depends on how you mount the file system, just take a look at mount options for different file systems in man mount. [131022000020] |For example iso9660, vfat and fat have iocharset and utf8 options. [131022010010] |The unix/posix layer of linux doesn't care which encoding you use. [131022010020] |It stores the byte sequence of your current encoding as-is. [131022010030] |I think those mount options are there to help you convert specific filesystems that define a charset to your system charset. [131022010040] |(CDROMs, NTFS and the FAT variants use some unicode variants). [131022010050] |I wish unix defined a system global encoding, but it is actually a per user setting. [131022010060] |So if you define a different encoding then your collegue, your filenames will show up differently. [131022020010] |As noted by others, there isn't really an answer to this: filenames and paths do not have an encoding; the OS only deals with sequence of bytes. [131022020020] |Individual applications may choose to interpret them as being encoded in some way, but this varies. [131022020030] |Specifically, Glib (used by Gtk+ apps) assumes that all file names are UTF-8 encoded, regardless of the user's locale. [131022020040] |This may be overridden with the environment variables G_FILENAME_ENCODING and G_BROKEN_FILENAMES. [131022020050] |On the other hand, Qt defaults to assuming that all file names are encoded in the current user's locale. [131022020060] |An individual application may choose to override this assumption, though I do not know of any that do, and there is no external override switch. [131022020070] |Modern Linux distributions are set up such that all users are using UTF-8 locales and paths on foreign filesystem mounts are translated to UTF-8, so this difference in strategies generally has no effect. [131022020080] |However, if you really want to be safe, you cannot assume any structure about filenames beyond "NUL-terminated, '/'-delimited sequence of bytes". [131022020090] |(Also note: locale may vary by process. [131022020100] |Two different processes run by the same user may be in different locales simply by having different environment variables set.) [131022030010] |Mono book recommendations [131022030020] |Does anyone know if there are any upcoming book releases for Mono/GTK#? [131022030030] |The only book I could find on Amazon with a decent rating is over 6 years old. [131022040010] |There is a page on the Mono site dedicated to books. [131022040020] |I hope you will find something useful there. [131022050010] |Problematic build script with quotes [131022050020] |Hi I'm trying to create a build script that executes these commands: [131022050030] |It looks like this: [131022050040] |The thing is, the echo command shows the exact configure command I want, but when running configure it doesn't so the right thing. [131022050050] |Something goes terribly wrong, after CFLAGS=-mtunecore2, when configure tries to use -flto (which is quoted!!) as an argument. [131022050060] |What am I doing wrong? [131022050070] |Thanks! [131022050080] |PS: I'm running MSYS, not real *nix... [131022060010] |Note that you don't define PREFIX in your script, so it needs to be defined elsewhere for this to work, but I think you want(Warning: I did no testing on this.): [131022060020] |Notice that when you are using double-quotes that the variable will be expanded within the string (often called variable interpolation). 
[131022060030] |Also, I believe you had mismatched quotes in your original script that were also likely causing you problems. [131022070010] |There are some purely Unixoid-shell issues here, I hope MSYS is not adding extra problems, too. [131022070020] |Doing multiple levels of quoting can be tricky. [131022070030] |I usually prefer to avoid them if possible. [131022070040] |The modification below uses the $@ parameter to store and retrieve the arguments that will have embedded whitespace. [131022070050] |If you need multiple such lists of arguments with embedded whitespace and your shell has array parameters you could those (since there is only one of $@). [131022070060] |For your stated requirements, you can get away with just this single level of protection, but if you needed multiple levels of quoting (e.g. if BUILD_CORES contained whitespace you would be dealing with multiple levels of quoting: one level of quoting in makeopts/BUILD_LFLAGS, and two levels of quoting in CONFIG_OPTS). [131022070070] |In that case I might resort to using printf with %q and eval. [131022070080] |The %q format specifier is available in most, but not all, shells (ksh, bash, zsh). [131022070090] |It quotes its value so that it can later be evaluated correctly. [131022070100] |In this particular case you would also need to take care that anything that gets LFLAGS from configure will know how to “read” the quoted value that the shell produced. [131022070110] |This could be a problem if the script is written in a shell language where %q produces a construct that the default shell on the system (i.e. /bin/sh) does not understand (e.g. ksh will sometimes produce $'blah' constructs, but some shells do not know how to parse them). [131022080010] |How to make vertical bar | work in grep? [131022080020] |Here's an example of a grep command line I've tried: [131022080030] |It doesn't produce any output, even though all 3 values are in the file multiple times. [131022080040] |I tried escaping the bars, and that improved things somewhat - the middle value was detected but not the first or last. [131022080050] |I also tried double escapes and went back to getting no results. [131022080060] |What am I doing wrong? [131022080070] |Thanks. [131022090010] |The syntax you are trying to use belongs to extended regular expressions, so the answer is very simple, either use egrep or include the -E flag. [131022100010] |What you are running into is the difference between basic and extended regular expressions. [131022100020] |From grep's manual page: [131022100030] |In basic regular expressions the meta-characters ?, +, {, |, (, and ) lose their special meaning; instead use the backslashed versions \?, +, {, \|, (, and ). [131022100040] |Thus, you can do one of the following: [131022100050] |or [131022100060] |The -E flag turns on "Extended" regular expressions. [131022100070] |GNU grep also supports perl-compatible regular expressions. [131022110010] |Applications that run on Mono in Ubuntu [131022110020] |Dear all. [131022110030] |I think Mono, and the C# language, are a great, nay, fantastic project. [131022110040] |My question is: how prevalent is Mono in Ubuntu? [131022110050] |How much of a penetration is it getting, and what applications run on it? [131022110060] |Thanks. [131022120010] |There are a good number of programs that use mono in Ubuntu if you look at the whole repository. [131022120020] |In the default install, I believe the following are the only mono apps: [131022120030] |
  • f-spot
  • [131022120040] |
  • gbrainy
  • [131022120050] |
  • tomboy
  • [131022120060] |There may be more, I just made this list from looking at which applications would be removed if I removed libmono*. [131022120070] |However, even just having these means that a good portion of the mono framework is installed by default which makes it very easy to deploy mono apps onto Ubuntu. [131022120080] |A few very popular Ubuntu applications are written in mono, including gnome-do, Banshee, and docky. [131022120090] |The trend I've seen from the sidelines is that despite its detractors, mono is gaining a lot of ground with Desktop application authors because of the speed at one can develop fairly rich GUI apps with the monodevelop IDE. [131022130010] |How to suspend and resume proccesses [131022130020] |In the bash terminal I can hit Control+Z to suspend any running proccess... then I can type fg to resume the proccess. [131022130030] |Is it possible to suspend a process if I only have it's PID? [131022130040] |And if so, what command should I use? [131022130050] |I'm looking for something like: [131022130060] |and then to resume it with [131022140010] |You should use [131022140020] |To be more verbose - you have to specify the right signal: [131022150010] |As maxschlepzig said you could use kill [131022150020] |and [131022160010] |Create directory in /var/run/ at startup [131022160020] |I'm using apache2 and postgres running on Ubuntu Server 10.04. [131022160030] |I have removed the startup scripts for both of these apps and I'm using supervisor to monitor and control them. [131022160040] |The problem I have run into is that both of these need directories in /var/run (with the correct permissions for the users they run under) for pid files. [131022160050] |How do I create these during startup as they need to be created as root and then chown'd to the correct user? [131022160060] |Edit It seems the best way to so this is to creat the directories with custom init scripts. [131022160070] |As I have no shell scripting skills at all how do I go about this? [131022170010] |I see a few options: [131022170020] |
  • Create the directories at install time
  • [131022170030] |
  • Create a file listing all directories to be created at startup, then write a program that creates all of those directories and run it at startup (from a startup script that has root privs).
  • [131022170040] |Other options that are riffs on this could also be made to work. [131022180010] |According to Debian policy, [131022180020] |/var/run and /var/lock may be mounted as temporary filesystems, so the init.d scripts must handle this correctly. [131022180030] |This will typically amount to creating any required subdirectories dynamically when the init.d script is run, rather than including them in the package and relying on dpkg to create them. [131022180040] |Obviously, Ubuntu inherits from Debian, and as far as I know this policy is unchanged there. [131022180050] |The best solution is to modify your new startup scripts such that when the services are launched, if these directories do not exist, they will be created. [131022190010] |In reply to this comment: [131022190020] |There are currently no startup scripts for the servcies. [131022190030] |The supervisor daemon is started by the init.d scripts and then the other services are started by this service, which should not run as root. [131022190040] |If your supervisor is started from init.d script, then just create another init.d script with the preferences to be run before the supervisor starts ( how you achieve this is totally dependent on your flavor of **IX ). [131022190050] |In its start method create needed directories with required permissions. [131022190060] |In its stop method tear those directories down. [131022200010] |In the end I used this code in the init.d script for the supervisor process: [131022200020] |Which reads a file containing rows with the following format and creates the appropriate dirs and permissions: [131022200030] |path/to/dir:user:group [131022210010] |Given keys in ~/.ssh/authorized_keys format, can you determine key strength easily? [131022210020] |~/.ssh/authorized_keys[2] contains the list of public keys. [131022210030] |Unfortunately, each public key does not specify the key strength ( number of bits ). [131022210040] |Is there a utility that can process this file line by line and output the key strength? [131022210050] |I checked man pages for ssh-keygen, but it looks like it would only work with private keys. [131022210060] |Also, is there a tool that would output sha1 hash the same way as it is displayed in pageant Putty tool? [131022210070] |The format I am looking for: [131022220010] |ssh-keygen can do the core of the work (generating a fingerprint from a public key), but it will not automatically process a list of multiple keys as is usually found in an authorized_keys file. [131022220020] |Here is a script that splits up the keys, feeds them to ssh-keygen and produces the table you want: [131022230010] |Install development files locally to build on system without root access? [131022230020] |There is a server that I do work on, running an older version of Linux. [131022230030] |I don't have root access to the system, so I wanted to build a more recent version of a tool that I use a lot (Vim 7.3). [131022230040] |I figured I would just build it and install it in ~/bin. [131022230050] |However, it requires ncurses development files which are not installed system-wide. [131022230060] |I found the ncurses-devel rpm, and extracted the 'lib' and 'include' folders, where would I put them and how would I tell the ./configure script to find them so I could properly configure and build the package locally? [131022230070] |Edit: I ended up working around this by installing the identical OS in Virtualbox, and building the package there and copying over the binaries. 
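For the vim/ncurses question just above, a common way to point a configure script at headers and libraries unpacked under your home directory is to pass CPPFLAGS and LDFLAGS; a sketch, with all paths being assumptions:

    # assuming the include/ and lib/ trees from ncurses-devel were extracted into ~/local
    ./configure --prefix="$HOME" \
        CPPFLAGS="-I$HOME/local/include" \
        LDFLAGS="-L$HOME/local/lib"
    make && make install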
[131022240010] |I did this quite frequently in my last job - the solution that seemed to work best was to create a ~/usr directory, and use the --prefix argument to point the ./configure scripts in the right direction. [131022240020] |Here are the steps: [131022240030] |
  • Create a ~/usr directory, with include, lib and bin directories underneath it.
  • [131022240040] |
  • In your .profile, .bashrc, or other shell init script, add lines like the ones sketched after this list (or the equivalent in your shell's dialect):
  • [131022240050] |
  • When building packages, use ./configure --prefix=$HOME/usr
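For the shell-init step above, the lines in question would look roughly like this (a sketch; adjust the paths to your own layout):

    # make programs and libraries installed under ~/usr visible
    export PATH="$HOME/usr/bin:$PATH"
    export LD_LIBRARY_PATH="$HOME/usr/lib:$LD_LIBRARY_PATH"
    # help configure scripts and compilers find headers and libs installed there
    export CPPFLAGS="-I$HOME/usr/include $CPPFLAGS"
    export LDFLAGS="-L$HOME/usr/lib $LDFLAGS"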
  • [131022240060] |This arrangement worked for me for most situations where I needed to build things in userspace. [131022240070] |The hardest part is usually finding and building all the dependencies you need, but that just takes some googling or judicious use of your package manager's 'get source' functionality. [131022250010] |Normally you should be able to re-configure and change the code to define a new location in your home directory or other path for all libraries and programs... [131022250020] |But, IMHO, the easiest way (if you have plenty of space) is use chroot in a subdirectory with all a linux distro installed in it. [131022250030] |Of cuorse as a normal user you can not use chroot, but you can use these great tools: fakechroot and fakeroot [131022250040] |To create the chroot filesystem, I like to deploy a directory with Debian (or any Debian derivate like ubuntu) using the debootstrap utility. [131022250050] |So the procedure is easy (I will not enter in technical details, read the command manuals): [131022250060] |
  • Get and install: fakeroot, fakechroot and debootstrap tools
  • [131022250070] |
  • create a subdirectory using debootstrap
  • [131022250080] |
  • Execute: fakechroot fakeroot chroot mydebian-dir
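A rough sketch of those steps on a Debian-style system; the suite, mirror and directory name are assumptions:

    # 1. get the tools (package names as in Debian/Ubuntu; ask the admin if you cannot install them yourself)
    sudo apt-get install fakeroot fakechroot debootstrap

    # 2. populate a directory with a minimal Debian system, without real root
    fakechroot fakeroot debootstrap --variant=fakechroot stable mydebian-dir \
        http://deb.debian.org/debian

    # 3. enter the tree as a fake root user
    fakechroot fakeroot chroot mydebian-dir /bin/bash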
  • [131022250090] |If you need more details, I think I had an script somewhere to do all this. [131022260010] |GUI for adding Fedora software source [131022260020] |Is there a GUI for adding additional software sources in Fedora (FC 13). [131022260030] |I have a software repository that works fine when added manually (as a .repo file in /etc/yum.repos.d/), but I'd like to have a better way of telling end users how to install. [131022270010] |PackageKit, the default package management tool in Fedora 13, does not include a method and it's unlikely that they ever will as it's a deliberate design choice to not include repository management. [131022270020] |However, could you instead package up the repo file into an RPM and distribute that? [131022270030] |By default RPMs will open with Package Installer and that's GUI based. [131022280010] |You can use yumex [131022290010] |why a wrong password gives a big delay [131022290020] |Hello, [131022290030] |I notice a weird (well, according to me) thing about passwords. [131022290040] |For example, if I type an incorrect password during login, there will be a few second's delay before the system tells me so. [131022290050] |When I try to sudo with a wrong password I would also have to wait before the shell says "Sorry, try again". [131022290060] |I wonder why it takes so long to "recognize" an incorrect password? [131022290070] |This has been seen on several distributions I use (and even OSX), so I think it's not a distribution specific thing. [131022300010] |This is a security thing, it's not actually taking long to realize it. [131022300020] |2 vulnerabilities this solves: [131022300030] |
  • this throttles login attempts, meaning someone can't hammer the system with password guesses as fast as it can process them (perhaps a million attempts a second).
  • [131022300040] |
  • If the system rejected a wrong password as soon as it had verified that the credentials were incorrect, the time it took to respond could be used to guess whether part of the credentials was correct, dramatically reducing the guessing time.
  • [131022300050] |to prevent these 2 things the system just takes a certain amount of time to do it, I think you can configure the wait time with PAM ( See Michaels answer ). [131022300060] |Security Engineering ( 2ed, amazon | 1ed, free ) gives a much better explanation of these problems. [131022310010] |This is intentional, to try and limit brute forcing. [131022310020] |You can usually modify it by looking for the FAIL_DELAY configuration entry in /etc/login.defs and changing its value (mine is 3 seconds by default), although the comment in that file makes it sound like PAM will enforce at least a 2 second delay no matter what [131022320010] |Sync Palm Centro pictures on Linux [131022320020] |I have a Palm Centro, and I'd like to copy the pictures to my computer. [131022320030] |However, I'm using Ubuntu, and I don't want to switch to Windows or use Palm's horrible sync application (through Wine.) Is there a Linux application I can use to easily copy my pictures from my phone to my computer? [131022320040] |I'd prefer a simple command-line script to a monolithic productivity suite. [131022330010] |Have you attempted to use the "Palm OS devices" menu option in System -> Preferences? [131022330020] |(It may be in System -> Administration, I'm not in front of my Ubuntu boxes.) [131022330030] |(Full disclosure: This answer assumes that a/your Palm Centro doesn't run Windows Mobile.) [131022340010] |Download J-pilot. once installed, I googled "pictures and video plugin j-pilot." [131022340020] |Install it, restart, re-sync. [131022340030] |All of your pictures and videos will now be on your computer. [131022340040] |Magic. [131022350010] |Looking for ARM- or MIPS-based computer (netbook or similar size) to play with. Any recommendations? [131022350020] |I am bored and want to play with a netbook or other small computer. [131022350030] |Can anyone recommend something? [131022350040] |Requirements: [131022350050] |
  • ARM or MIPS CPU
  • [131022350060] |
  • Can run Windows CE 6 and/or Linux (preferably dual-boot)
  • [131022350070] |
  • Linux distribution must be full-fledged (for the architecture, i.e. no single-user toy GUI)
  • [131022350080] |
  • Costs less than 400 Euros (although a cool machine can cost more)
  • [131022350090] |
  • Wireless LAN, any speed
  • [131022350100] |
  • USB ports, 1.1 or 2.0
  • [131022350110] |
  • SD card reader would be nice but is not necessary
  • [131022350120] |
  • Slot for SIM card and built-in 3G would be nice but is not necessary
  • [131022350130] |
  • If a netbook/laptop, it should keep running when the lid is closed
  • [131022350140] |Any recommendations? [131022360010] |Take a look at the Always Innovating Touch Book. [131022360020] |It's only US $400, it uses an ARM TI OMAP3 chip, and it can run several Linux distributions (not sure about Windows CE). [131022360030] |It also has a couple nifty features, like a detachable touch-screen that functions as a tablet and motion-sensing capabilities thanks to a 3D accelerometer. [131022370010] |The Wikipedia page "Netbook" lists several ARM-based- and MIPS-based netbooks. [131022370020] |
  • "HP Compaq Airlife 100 ... for €230" is apparently "HP's ARM-powered Android netbook" slashgear.com/hp-compaq-airlife-100-arrives-in-spain-for-e230-2983611/ linuxfordevices.com/c/a/News/HP-Compaq-AirLife-100-on-US-website/
  • [131022370030] |
  • ARM netbook sells for $80 linuxfordevices.com/c/a/News/Menq-Easypc-E790/?kc=rss
  • [131022370040] |
  • "A Hong Kong-based manufacturer is shipping a Linux-based ultra-mini PC (UMPC) laptop for only $250 ... [131022370050] |Based on an "industry standard" RISC-based architecture (possibly MIPS?) the chip reportedly runs Windows CE as well as Linux." linuxfordevices.com/c/a/News/Worlds-cheapest-Linuxbased-laptop/
  • [131022370060] |You might also look at smartphones and PDAs that run Linux; practically all of them use ARM CPUs, and some version of Linux has been ported to many of them. [131022370070] |
  • Linux PDAs http://www.linuxfordevices.com/c/a/Linux-For-Devices-Articles/Linux-PDAs-PMPs-PNDs-and-other-Handhelds/
  • [131022370080] |
  • $330 Pandora linuxfordevices.com/c/a/News/CortexA8-gaming-handheld-runs-Linux/
  • [131022370090] |
  • $175 iKit linuxfordevices.com/c/a/News/Tiny-clamshell-PDA-runs-Linux/
  • [131022370100] |
  • Psion wikipedia.org/wiki/Psion#Psion_and_Linux
  • [131022370110] |
  • ... somewhere on the Internet I saw a "custom laptop" built out of a PDA running Linux, a full-size PDA keyboard, and a hinge and a few other things to let it fold and unfold like a full-size laptop. [131022370120] |For typing text and grepping through text, it ran for days between recharges. [131022370130] |Screen was a bit small, though. [131022370140] |I wish I could find a link to it ...
  • [131022380010] |If I was bored, and looking to hack on an Arm-cortex based mini cool device, I'd be buying one of these: [131022380020] |http://openpandora.org/ [131022380030] |No touch screen though. [131022380040] |Since you also mentioned Sim cards, I think that a large cell phone or tablet, like the Nokia Internet Tablet would fit the bill. [131022390010] |What makes a portable mp3 player work well with Linux? [131022390020] |I'm in the market for a new portable mp3-player, and will be connecting it mostly to a Linux box, but occasionally to a Windows Vista machine as well. [131022390030] |I'm wondering what qualities I should be looking for in a music player that suggest good out of the box Linux support. [131022390040] |Having struggled to get my iPod to play nicely with Linux on a consistent basis, I'm hoping that I can find something that offers better native Linux support. [131022390050] |I've noticed a couple things in my search thus far: [131022390060] |
  • Ogg Vorbis support and "Linux Compatible" are highly correlated
  • [131022390070] |
  • Linux Compatible is often qualified with something like "support only for data transfer"
  • [131022390080] |Are these the only sorts of clues I'll be able to follow? [131022390090] |I'd appreciate any advice on what to look for, or examples of products that do it right. [131022400010] |If the player works as a usb-disk when connected to the PC (no need for special application to transfer) then it should be working on any platform supporting usb-disks. [131022400020] |Ogg Vorbis support is a plus regardlles of platform, but hardly a must for any (more a question what format your music collection is in). [131022400030] |Mp3 works just fine. [131022400040] |Personally don't see the point with a dedicated mp3 player, I'm using my mobilephone as mp3 player. [131022410010] |If you're not averse to installing a custom firmware I would look at the list of supported devices for RockBox. [131022410020] |This will let you add music to your mp3 player like it's an external storage device, and has great codec support such as OggVorbis, Flac, and many others. [131022410030] |I've used rockbox on my old iPod and it was fantastic, the navigation takes a little bit to get used to but it made my iPod a much more usable device. [131022420010] |To be crass and short: Not being made by Apple. [131022420020] |To elaborate: Your clues are pretty much spot-on. [131022420030] |USB Mass Storage ("Data Transfer Only") is generally all you need. [131022420040] |Generally with Linux I throw out the vendor-supplied software and just use the utilities shipped with my distro. [131022420050] |You can use Rhythmbox/Banshee/Amarok/etc., or you can use a file manager, or rsync, or whatever you choose, but they're ALL better than some proprietary music manager, unless you want DRM. [131022420060] |(Don't know why you would...) [131022420070] |If it says it works w/ Mac OS X "data transfer only", you can reasonably assume Linux and Unix (BSD's, etc.) will also work. [131022420080] |The only other nice thing is Firmware Upgrades On-Device. [131022420090] |The Sansa line can update either by unzipping an archive and dropping the files on the player or using their Sansa Updater utility. [131022420100] |I never needed to use the Sansa Updater when I was running stock firmware. [131022430010] |The music library program Songbird has a linux branch (that they cut official support to) that supports a pretty solid amount of PMPs/Smartphones (the Windows version has the Moto Droid for sure, i believe the linux version does too). [131022430020] |It helps in syncing both audio files on the phone with the desktop as well as playlists. [131022440010] |Linux Cell Phones? [131022440020] |I know of the FreeRunner, but are there any other Linux cell phones out there? [131022440030] |Are they any good? [131022450010] |All Android based phones are also Linux phones. [131022450020] |Android relies on Linux version 2.6 for core system services such as security, memory management, process management, network stack, and driver model. [131022450030] |The kernel also acts as an abstraction layer between the hardware and the rest of the software stack. [131022460010] |Nokia N900 is one of the Linux based phones I know. [131022460020] |It even has a terminal app out of the box to access shell! [131022470010] |Even older than the FreeRunner was the GreenPhone. [131022470020] |It ceased production in 2007. [131022470030] |The software did manage to live on as QtMoko/Debian for the FreeRunner. [131022480010] |Palm's WebOS phones are Linux powered as well. [131022480020] |They do not need to be rooted to gain access to the system. 
[131022480030] |WebOS has a very active home-brew community and many standard Linux packages available via Optware. [131022480040] |I've got my Palm Pre set up as a web server, accessible via ssh, and even had Samba running on it for a while. [131022480050] |Check out WebOS Internals. [131022490010] |Are they any good? [131022490020] |My answer is about the Nokias and Androids. [131022490030] |I recommend waiting until the problems listed below are fixed. [131022490040] |Poor keyboards on the Nokias, though not on the Androids, at least the G1. [131022490050] |Poor usability in both camps, however, will hinder your productivity. [131022490060] |The N8XX and N9XX family has very poor keyboard designs -- it is darn hard to get even a tilde or programming quotes -- which kills your productivity. [131022490070] |Android phones in contrast, such as the G1, have much better keyboards but are otherwise not as open as the Nokia family; openness here is a very subjective term -- however hard they market their phones as "open source", they are not. [131022490080] |I've heard the Nokia N900 is more open than the N8XX, but if I have understood correctly it still has some closed code, for example related to the transmitter/antenna; check the current state in Freenode's Maemo channel, as this can change like a windmill. [131022490090] |As for the Androids, I tried everything -- CyanogenMod, Dev Phones -- but a plain BusyBox CLI abstraction is all you get, and a real multi-tasking command line is not possible (not on Androids and not on Nokias) -- again a blow to productivity. [131022490100] |Some infant problems with current "Linux" phones: [131022490110] |
  • native multi-tasking CLI (no abstraction pling-pling like busy-box), not the same as Nokias "GUI multitasking"-marketing-pling-pling
  • [131022490120] |
  • missing or poorly implemented programs such as GNU Screen, Mutt, Vi, irssi and other such basics (bad for productivity)
  • [131022490130] |
  • poor QWERTY keyboard with hard-to-use programmer-keys, please, no more display clicking like with Nokias
  • [131022490140] |
  • no native Debian or similar OS running, you need to box it at least with N900
  • [131022490150] |
  • not open and obfuscated code, like with Cyanogenmod's Nvidia driver (not verified just rumour in Freenode's #cyanogenmod, speculation)
  • [131022490160] |Cannot recommend any of the infant products, they are disgraceful in their usability and debatable openness. [131022490170] |You may like some of their features like SSH but you will encounter productivity problems. [131022490180] |I got rid of my Nokias, Androids, Cyanogen-mod-messes -- will go back if I can find a phone with fixed above problems. [131022490190] |Please, let me know if you know any phone that address the problems -- and seriously why the title is about "linux", I want BSD phone, any idea whether any OBSD phone planned or in production? [131022500010] |Linux Programmable Controller [131022500020] |Hi, [131022500030] |I'm looking for a programmable Linux controller fro home automation and general fun projects. [131022500040] |Requirements: [131022500050] |
  • Controlling electric appliances - On/Off switches and dimmers (perhaps using relays)
  • [131022500060] |
  • Receive analogue and digital data from sensors (switches, temperatures, etc.)
  • [131022500070] |
  • USB connection
  • [131022500080] |
  • Running Linux
  • [131022500090] |Advantages: [131022500100] |
  • Network connection / Web interface
  • [131022500110] |
  • Python support
  • [131022500120] |
  • Small display screen
  • [131022500130] |
  • Keyboard and VGA support
  • [131022500140] |I used to have a lot of fun with a Handy Board, but it has broke down few months ago, and it lacks many vital features. [131022500150] |Any ideas for such a hardware device? [131022500160] |Adam [131022510010] |Not knowing your price range, I suggest Gumstix. [131022510020] |The boards are quite expensive, but very powerfull, especially with the ATmel Robostix expansion board. [131022510030] |I suggest the Robostix Starter Pack, this should get you going programming with Gumstix. [131022520010] |Is not so powerfull as a normal PC, but you should try arduino platform. [131022520020] |You can buy a great and cheap unit here: http://www.libelium.com/ [131022520030] |Google a little bit about arduino and you will find a lot of references and a big community [131022530010] |Diffing two big text files [131022530020] |I have two big files (6GB each). They are unsorted, with linefeeds (\n) as separators. [131022530030] |How can I diff them? [131022530040] |It should take under 24h. [131022540010] |The most obvious answer is just to use the diff command and it is probably a good idea to add the --speed-large-files parameter to it. [131022540020] |You mention unsorted files so maybe you need to sort the files first [131022540030] |you could save creating an extra output file by piping the 2nd sort output direct into diff [131022540040] |Obviously these will run best on a system with plenty of available memory and you will likely need plenty of free disk space too. [131022540050] |It wasn't clear from your question whether you have tried these before. [131022540060] |If so then it would be helpful to know what went wrong (took too long etc.). [131022540070] |I have always found that the stock sort and diff commands tend to do at least as well as custom commands unless there are some very domain specific properties of the files that make it possible to do things differently. [131022550010] |If someone like me wonders why <(cmd1) <(cmd2) syntax works (as it sounds like redirecting standard input twice!), try echo hello <(cmd1) <(cmd2). [131022550020] |You'll see something like hello /dev/fd/63 /dev/fd/62 which suddenly makes it clear ;) [131022560010] |Wireless multi function scanner / printer [131022560020] |Can someone recommend me a wireless multi function scanner / printer that will work with Ubuntu? [131022560030] |If something like this exists at all.. [131022570010] |I have had good success using HP and Epson printers. [131022570020] |Here is a list of supported Ubuntu printers/Multi-function printers, You could also check out Open Printing to find a supported Linux printer. [131022570030] |Good luck [131022580010] |Rsync filter: copying one pattern only [131022580020] |I am trying to create a directory that will house all and only my PDFs compiled from LaTeX. [131022580030] |I like keeping each project in a separate folder, all housed in a big folder called LaTeX. [131022580040] |So I tried running: [131022580050] |which should find all the pdfs in ~/LaTeX/ and transfer them to the output folder. [131022580060] |This doesn't work. [131022580070] |It tells me it's found no matches for "*.pdf". [131022580080] |If I leave out this filter, the command lists all the files in all the project folders under LaTeX. [131022580090] |So it's a problem with the *.pdf filter. [131022580100] |I tried replacing ~/ with the full path to my home directory, but that didn't have an effect. [131022580110] |I'm, using zsh. 
[131022580120] |I tried doing the same thing in bash and even with the filter that listed every single file in every subdirectory... [131022580130] |What's going on here? [131022580140] |Why isn't rsync understanding my pdf only filter? [131022580150] |OK. [131022580160] |So update: No I'm trying [131022580170] |And this gives me the whole file list. [131022580180] |I guess because everything matches the first pattern... [131022590010] |How about this: [131022600010] |If you use a pattern like *.pdf, the shell “expands“ that pattern, i.e. it replaces the pattern with all matches in the current directory. [131022600020] |The command you are running (in this case rsync) is unaware of the fact that you tried to use a pattern. [131022600030] |When you are using zsh, there is an easy solution, though: The ** pattern can be used to match folders recursively. [131022600040] |Try this: [131022610010] |Judging by the "INCLUDE/EXCLUDE PATTERN RULES" section of the manpage, the way to do this is [131022610020] |The critical difference between this and kbrd's answer is the --include="*/" flag, which tells rsync to go ahead and copy any directories it finds, whatever they are named. [131022610030] |This is needed because rsync will not recurse into a subdirectory unless it has been instructed to copy that subdirectory. [131022610040] |Also, note that the quotation marks prevent the shell from trying to expand the patterns to filenames relative to the current directory, and doing one of the following: [131022610050] |
  • Succeeding and messing up your filter (not too likely in the middle of a flag like that, though you really never know when someone will make a file named --include=foo.pdf ...)
  • [131022610060] |
  • Failing, and potentially producing an error instead of running the command (as you've discovered zsh does by default).
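For reference, a command of the kind this answer describes could look like the following (a minimal sketch rather than the poster's original command; the ~/LaTeX/ and ~/Output/ paths come from the question, and the exact flags are assumptions to adjust as needed):

    rsync -avm --include='*/' --include='*.pdf' --exclude='*' ~/LaTeX/ ~/Output/

Here --include='*/' lets rsync descend into every directory, --include='*.pdf' keeps the PDFs, --exclude='*' drops everything else, and -m (--prune-empty-dirs) avoids creating directories that end up containing nothing.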
  • [131022620010] |You can use find and an intermediate list of files (files_to_copy) to solve your issue. [131022620020] |Make sure you're in your home directory, then: [131022620030] |find LaTeX/ -type f -a -iname "*.pdf" >files_to_copy && rsync -avn --files-from=files_to_copy ~/ ~/Output/ && rm files_to_copy [131022620040] |Tested with Bash. [131022630010] |Here is something that should work without using find. [131022630020] |The difference from answers already posted is the order of the filter rules. [131022630030] |Filter rules in an rsync command work a lot like iptables rules: the first rule that a file matches is the one that is used. [131022630040] |From the manual page: [131022630050] |As the list of files/directories to transfer is built, rsync checks each name to be transferred against the list of include/exclude patterns in turn, and the first matching pattern is acted on: if it is an exclude pattern, then that file is skipped; if it is an include pattern then that filename is not skipped; if no matching pattern is found, then the filename is not skipped. [131022630060] |Thus, you need a command as follows: [131022630070] |Note the "**.pdf" pattern. [131022630080] |According to the man page: [131022630090] |if the pattern contains a / (not counting a trailing /) or a "**", then it is matched against the full pathname, including any leading directories. [131022630100] |If the pattern doesn't contain a / or a "**", then it is matched only against the final component of the filename. [131022630110] |(Remember that the algorithm is applied recursively so "full filename" can actually be any portion of a path from the starting directory on down.) [131022630120] |In my small test, this does work recursively down the directory tree and only selects the PDFs.
  • Inclusions and exclusions: [131022640030] |
  • Excluding files by name or by location is easy: --exclude=*~, --exclude=/some/relative/location.
  • [131022640040] |
  • If you only want to match a few files or locations, include them, include every directory leading to them (for example with --include=*/), then exclude the rest with --exclude='*'. [131022640050] |This is because:
  • [131022640060] |
  • If you exclude a directory, this excludes everything below it.
  • [131022640070] |
  • If you include a directory, this doesn't automatically include its contents. [131022640080] |In recent versions, --include='directory/***' will do that.
  • [131022640090] |
  • For each file, the first matching rule applies (and anything never matched is included).
  • [131022640100] |
  • Patterns: [131022640110] |
  • If a pattern doesn't contain a /, it applies to the file name sans directory.
  • [131022640120] |
  • If a pattern ends with /, it applies to directories only.
  • [131022640130] |
  • If a pattern starts with /, it applies to the whole path from the directory that was passed as an argument to rsync.
  • [131022640140] |
  • * matches any substring of a single directory component (i.e. it never matches /); ** matches any path substring.
  • [131022640150] |
  • If a source argument ends with a /, its contents are copied (rsync -r a/ b creates b/foo for every a/foo). [131022640160] |Otherwise the directory itself is copied (rsync -r a b creates b/a); a short illustration follows below.
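A quick way to see the difference between the two forms (a minimal sketch; the names a, b, c and foo are just placeholders):

    mkdir -p a && touch a/foo
    rsync -r a/ b   # trailing slash: copies the contents, creating b/foo
    rsync -r a  c   # no trailing slash: copies the directory itself, creating c/a/foo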
  • [131022640170] |Thus here we need to include *.pdf, include directories containing them, and exclude everything else. [131022640180] |Note that this copies all directories, even the ones that contain no matching file or subdirectory containing one. [131022640190] |This can be avoided with the --prune-empty-dirs option (it's not a universal solution since you then can't copy a directory even by matching it explicitly, but that's a rare requirement). [131022650010] |The default is to include everything, so you must explicitly exclude everything after including the files you want to transfer. [131022650020] |Remove the --dry-run to actually transfer the files. [131022650030] |If you start off with: [131022650040] |Then the greedy matching will exclude everything right off. [131022650050] |If you try: [131022650060] |Then only pdf files in the top level folder will be transferred. [131022650070] |It won't follow any directories, since those are excluded by '*'. [131022660010] |Ubuntu: How can I disable boot and logout screen. [131022660020] |I want to be able to see all stuff from the boot process, just like in Debian. [131022660030] |I instaled startupmanager but It only change boot screen to more ugly. [131022660040] |I search google and find that you must remove "quiet" from /boot/grub/menu.lst but I don't have that file on my system. [131022670010] |It sounds like you are using Grub 2. I haven't tried it, but acording to this article. [131022670020] |You will need to change /etc/default/grub, so that the line : [131022670030] |becomes: [131022670040] |in other words, remove quiet and splash from the GRUB_CMDLINE_LINUX_DEFAULT variable. [131022670050] |update: from jcubic's comment: [131022670060] |also remove splash and vga=769 from the GRUB_CMDLINE_LINUX variable. [131022680010] |Is there any way to assign shortcuts to Nautilus bookmarks? [131022680020] |I have added several dirs as bookmarks to nautilus and I want to know if there a way to assign keyboard shortcuts to some of them. [131022690010] |Well, your bookmarks can be found in the file ~/.gtk-bookmarks so you could hack something together but there is no built-in way (according to docs etc) [131022700010] |You can go into Keyboard Shortcuts and create a new "custom shortcut" as "nautilus ~/Documents". [131022710010] |Audio recording does not work -- how to diagnose? [131022710020] |I used KRec to record some audio, but when I played it back I got nothing. [131022710030] |What are the things I need to do to figure out where the problem is? [131022710040] |My computer is a Dell OptiPlex DX620, Intel Pentium 4 3.00 GHz running Fedora 7 (Moonshine) [131022720010] |Did you remember to check the volume settings? [131022720020] |It looks like the program for this in KDE is kmix. [131022720030] |(Yes, I know this is like asking if you remembered to plug in your computer and connect your monitor when you complain that the screen is blank, but everyone misses those steps once in a while...) [131022730010] |You might also need to select the input device. [131022730020] |Those controls are usually in extended settings in your mixer. [131022740010] |How to track/fix a memory-related issue in a large C/C++ code on *nix systems [131022740020] |What strategy you use while tracking a memory-related issue? [131022740030] |What tools do you use (open source as well as proprietary) to identify memory leaks, memory corruption etc? [131022740040] |How would you track memory leaks if only gdb/dbx available on a system? 
[131022740050] |For me, fixing memory leaks with only a debugger is very hard. [131022750010] |valgrind is amazingly helpful. [131022760010] |If you can change the source code, Dmalloc is great; it will list which pointers were unfreed and (for code built with debugging symbols) exactly which line they were allocated on. [131022760020] |If you can't, Valgrind is pretty much the standard for that sort of thing. [131022760030] |I generally find Valgrind somewhat harder to use, but it has way more features and doesn't involve adding dmalloc calls into your code. [131022770010] |For Solaris, there are several tools listed in the answers to this StackOverflow question (they include leak checking with other forms of bad memory access). [131022780010] |Hello, I have been doing Objective-C for some time, and there is an analyzer that deals with C-level memory management and stuff like that. [131022780020] |The Clang Static Analyzer is so good that Apple decided to bundle it with their Xcode IDE. [131022780030] |I'm not sure if this is good for your question, but if you are doing C then it's worth a try. [131022790010] |Massif (from valgrind) is one of the best ways to find memory leaks. [131022790020] |Repeat your suspicious code (or run your program long enough) and dump the result with ms_print. Usually, the call stack gives you enough information to fix it. [131022790030] |With GDB, you can try to attach to a running program and call functions such as malloc_stats(). [131022790040] |If your program is written in a different language, it might be more tricky. [131022790050] |Recently, GDB has gained scriptability, and people have started interesting projects such as gdb-heap, which can analyze Python memory from a core dump. [131022790060] |Similar memory analysis scripts might be possible for C++ objects. [131022790070] |Read also http://stackoverflow.com/questions/2564752/examining-c-c-heap-memory-statistics-in-gdb [131022800010] |Is there something that will generate keyboard click sounds? [131022800020] |I miss using a clicky keyboard at work. [131022800030] |It's a fairly quiet office, so I'm stuck using a nearly silent keyboard. [131022800040] |The upshot is that I can wear headphones. [131022800050] |Is there something in Linux or X that can respond to all keyboard events with a nice, sharp click, giving me that audio feedback? [131022800060] |Before you think I'm crazy, I know some high-end keyboards even have speakers in them to reproduce this click for those who like the audio feedback. [131022800070] |I'm looking for something at the operating system level. [131022810010] |Per their docs, but it doesn't work for me on openSUSE 11.2 x86_64 [131022820010] |How do I set up Alpine to connect to my Gmail account using IMAP? [131022820020] |I am using FreeBSD 8.1 and just installed the Alpine email client. [131022820030] |I wonder if anybody knows how to set up Alpine to get mail from a Gmail account using IMAP. [131022830010] |For clarity, I'm just going to give the directions in terms of what you should add to .pinerc. [131022830020] |You can also set all of these settings using the configuration interface if you wish.
[131022830030] |To get your mail via IMAP: [131022830040] |Include this to make sure you have access to all of the various gmail folders: [131022830050] |I find this useful to mimic "archiving": [131022830060] |To send mail via gmail, you need this in .pinerc: [131022830070] |Also, I find that these two settings improve performance a lot: [131022830080] |If you want alpine to remember your password for you, you can run this command in your home directory: [131022830090] |The first time you use alpine after running this command, you will be asked whether you want to save your password for later use each time you enter one. [131022840010] |rebuild auto-complete index (or whatever it's called) [131022840020] |After installing new software, an already opened terminal with zsh won't know about the new commands, and cannot generate auto-complete for those. [131022840030] |Apparently opening a new terminal fix the problem, but can the index (or whatever you call it) be rebuilt so that auto-complete will work on the old terminal? [131022840040] |I tried with compinit but that didn't help. [131022840050] |Also, is there a way that is not shell-dependent? [131022840060] |It's nice to have a way to verify the answer as well (except for uninstalling something and reinstalling it). [131022840070] |UPDATE: what I mean is after typing a few characters of a command's name, I can press tab, and zsh should do the rest to pull up the full name. [131022850010] |To rebuild the cache of executable commands, use rehash or hash -rf. [131022850020] |Make sure you haven't unset the hash_list_all option (it causes even fewer disk accesses but makes the cache update less often). [131022850030] |If you don't want to have to type a command, you can tell zsh not to trust its cache when completing by putting the following line in your ~/.zshrc¹: [131022850040] |There is a performance cost, but it is negligible on a typical desktop setting today. [131022850050] |(It isn't if you have $PATH on NFS, or a RAM-starved system.) [131022850060] |The zstyle command itself is documented in the zshmodule man page. [131022850070] |The styles values are documented in the zshcompsys and zshcompwid man pages, or you can read the source (here, of the _command_names function). [131022850080] |If you wanted some readable documentation… if you find some, let me know! [131022850090] |¹ requires zsh≥4.3.3, thanks Chris Johnsen [131022860010] |If you are having problems getting “argument completion” working for the new commands then compinit is probably the command you need, however it has a caching mechanism that might be causing your problem. [131022860020] |The documentation for my version (4.3.10) says that compinit uses a cached “dump file”, .zcompdump, to store compiled completion functions to speed up subsequent invocations. [131022860030] |It only invalidates the dump file when it notices a change in the number of completion files (fpath element files that start with #compdef … or #autoload …). [131022860040] |Presumably installing new software would change the number of such completion files (assuming that it also installed its zsh auto-complete files in the right place), so I would expect a plain compinit to work. [131022860050] |If you are in a situation where it does not work you may need to bypass or manually invalidate the dump file. [131022860060] |To skip using the dump file, use compinit -D; this will only affect the current shell. 
[131022860070] |To rebuild the dump file, remove it and rerun compinit: [131022860080] |This will affect the current shell, existing shells that run plain compinit, and any future shells. [131022870010] |Identifying files with special characters in their names in a terminal [131022870020] |Something strange happened today. [131022870030] |I was connected to an HP server with PuTTY, and saw two files with the exact same name: [131022870040] |-rw-r--r-- 1 hemantj 3368 Apr 1 12:47 test -rw-r--r-- 1 hemantj 20 Sep 1 12:47 test [131022870050] |It was very strange, as you can't have files with the same name. [131022870060] |Later, when I was connected with FileZilla, I saw that there was some special character at the end of the second file name. [131022870070] |My question: is it possible to highlight special characters in the file name in a terminal? [131022870080] |I am connected to HP-UX and my shell is tcsh and ksh. [131022870090] |EDIT: [131022870100] |Thanks all for your replies, but I think Keymon's solution is the simplest and most portable. +1 for all of you -- I learned a lot from your responses. [131022880010] |This is not real highlighting, but it might help to identify the characters: [131022880020] |It might be constructed better; teach me ;) [131022890010] |The fact that you're on HP-UX is really the problem. [131022890020] |With the GNU tools available on most Linux distributions there are several options. [131022890030] |There is ls -q, which displays special characters as '?'. [131022890040] |Or -b, which displays octal codes instead. [131022890050] |Another option is ls | cat -v. [131022890060] |Also find . -type f -ls, which displays using escape codes by default. [131022890070] |Some of those might work on HP-UX, but I don't know. [131022900010] |If you are lucky enough to have od, [131022900020] |ls -1 prints the filenames, separated by a newline. od -xC prints the input text in hexadecimal, with recognisable ASCII characters displayed under each octet. [131022900030] |Sample output for three files, named 1, 2, 3: [131022900040] |Sample output for two files, named 1 and '\n2': [131022900050] |Notice that there are three newlines for two files, and that lexicographically, "\n2" sorted before "1". [131022910010] |I use this: [131022920010] |Cannot add Ubuntu to the SuSE grub [131022920020] |I am running SuSE 11.2. [131022920030] |I also have Windows and Ubuntu on the same machine. [131022920040] |The problem is that I cannot get Ubuntu to show as a boot option on SuSE's grub. [131022920050] |The Ubuntu partition is on /dev/sda5. [131022920060] |Here is my /boot/grub/menu.lst: [131022930010] |Did you try doing that from YaST->Boot Loader->Choose Image, then filling in all the other options like your kernel image, etc., and you are done :) [131022940010] |I believe that the Ubuntu options are located in /etc/grub.cfg. [131022940020] |It will take some parsing; that file on my system looks like a whole shell script, but the tail end looks like you should be able to parse it out sufficiently to get what you need. [131022950010] |Your syntax looks correct. [131022950020] |I have a few recommendations. [131022950030] |First, try taking off the quiet splash. [131022950040] |Secondly, I would list the full path to the kernel and initrd image. [131022950050] |Lastly, make sure you update grub. [131022950060] |Caveat: I don't use SUSE, I found that command here. [131022950070] |Hope this helps.
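As a rough illustration of the kind of stanza these answers are talking about, a GRUB legacy menu.lst entry for the Ubuntu partition might look like this (a sketch only - the kernel and initrd file names are placeholders and must match what is actually installed on /dev/sda5, which is (hd0,4) in GRUB legacy notation):

    title Ubuntu
    root (hd0,4)
    kernel /boot/vmlinuz-<version> root=/dev/sda5 ro
    initrd /boot/initrd.img-<version>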
[131022960010] |Try chainloading the version of GRUB Ubuntu ships with instead of using the same GRUB from SuSE: [131022960020] |This way Ubuntu will manage its own GRUB configuration and kernel upgrades on its own partition. [131022970010] |Line numbering in Vim [131022970020] |How do I enable line numbers to be displayed in Vim? [131022980010] |To enable line numbering: [131022980020] |To disable it: [131022990010] |And if you want it to apply to every vim you open...put "set number" in your .vimrc! [131023000010] |I find it extremely useful to have a binding to toggle line numbering (among other things). [131023000020] |It can be configured as follows [131023000030] |The first command will toggle the line numbering for the current buffer (local) and the second one will display the current value of the option on the status line. [131023010010] |Sound Problems with pulseaudio [131023010020] |The overall question that my whole problem is kind of based on is regarding sound in Linux. [131023010030] |Should I be using ALSA or Pulse, or what? [131023010040] |I'm using Pulse at the moment and this is my problem: I was in the midst of configuring awesome and playing some music at the same time when I restarted awesome and found that sound no longer functioned. [131023010050] |After debugging and frustration I restarted and sound still doesn't work. [131023010060] |When run in super-verbose mode, pulseaudio gives a lot of output (see bottom of post), but I think the key line is this: [131023010070] |Any guesses? [131023010080] |Full log: [131023020010] |I see you are using Ubuntu. [131023020020] |I have a similar issue on my work laptop where sometimes, for no apparent reason, the sound devices die; a restart fixes the issue. [131023020030] |What kind of soundcard are you using? [131023030010] |My guess (based on my solution to what I'm pretty sure is the same problem) is that some application is locking your sound device. [131023030020] |Flash and Pidgin have both been culprits on my Gentoo box. [131023030030] |Try running "fuser /dev/snd/*", and killing any processes listed there. [131023030040] |This might be enough to get your sound working again. [131023030050] |(see also http://en.gentoo-wiki.com/wiki/PulseAudio#Troubleshooting) [131023040010] |Easy way to use hardware mixing? [131023040020] |My sound card supports hardware mixing; what's the easiest way to use this feature (rather than on an app-by-app basis, hopefully)? [131023050010] |I am afraid there is no easy way. [131023050020] |It is probably easier to just use PulseAudio for this. [131023050030] |PulseAudio does software mixing, but on current machines doing hardware mixing does not really provide advantages. [131023050040] |If you really want to use hardware mixing you have to look up some .asoundrc options and then use some ALSA pseudo-devices in your programs. [131023060010] |Trying to improve sound quality with ALSA. [131023060020] |I'm trying to make ALSA 1.0.23 use a different resampling algorithm. [131023060030] |I did some research on the Internet and found that putting the line defaults.pcm.rate_converter "" into either /etc/asound.conf or ~/.asoundrc will tell ALSA to use a different resampling algorithm. [131023060040] |However, it doesn't seem to work. [131023060050] |Putting the following line into ~/.asoundrc defaults.pcm.rate_converter "speexrate_best" doesn't have any effect on either CPU usage or the list of loaded libraries (doing lsof -n | grep speex while playing something yields nothing).
[131023060060] |However, the following snippet forces ALSA to use the new resampling algorithm: [131023060070] |Doing so brings CPU usage to 10-15% and makes two new shared libraries appear in the lsof list, but software mixing stops working and I can't play multiple audio files. [131023060080] |I'm probably missing something obvious, but I'm a complete noob. [131023060090] |What could be the issue here? [131023070010] |Looks like mplayer was doing the resampling all along. [131023070020] |Playing some wav files with aplay shows that the new resampling algorithm is being used as intended. [131023080010] |What is the best tool (or tools) to record video from a webcam on Linux? [131023080020] |My webcam is detected correctly (and I can use it in Skype without any issues), but how can I record video on Linux, preferably with a GUI tool? [131023090010] |You can use Cheese (GNOME) if you need just that, or VLC for more advanced features. [131023100010] |Randomise or shred memory of an application [131023100020] |Is it possible to randomise or shred the memory of a particular application just after its life ends, or better, whenever it deallocates some memory? [131023100030] |A command-line utility like this would be perfect: [131023100040] |shred-memory [options] [{params to the application...}] [131023110010] |Linux (like any modern multi-process OS, I would hope) ensures that processes only get zeroed-out pages when they allocate memory. [131023110020] |So a process cannot read the memory formerly used by another process. [131023110030] |Under Linux, the zeroing out happens when a page is allocated, not when it is freed. [131023110040] |This leaves two ways of reading the memory formerly used by a process: [131023110050] |
  • exploit a kernel bug
  • [131023110060] |
  • dump the contents of the RAM or swap (which requires root or physical access)
  • [131023110070] |A patch to the Linux kernel to allow zeroing out pages as soon as they are freed was proposed (sanitize_mem episode 1, sanitize_mem episode 2), but as far as I can tell not accepted. [131023110080] |In practice, the biggest window for attack is the swap space (which can retain data for a long time), and even that is not trivial for the attacker (who needs to steal the disk, and sort out the jumble of pages). [131023110090] |It's also the easiest to fix: encrypt the swap space with dm-crypt. [131023120010] |There are utilities for wiping memory, but it would be after the program in question had exited and freed up its memory. [131023120020] |I know of nothing that will specifically wipe the memory allocated by a particular program. [131023120030] |Check out "smem" (among others) from the "secure_delete" suite at thc.org for wiping free memory of a live system. [131023130010] |In addition to Gilles answer: [131023130020] |
  • You may lock pages in memory to prevent swapping (unless you know what you are doing - like storing a password - it is a bad idea and it may affect the performance of the system)
  • [131023130030] |
  • You may override the free function to sanitize memory by hand using LD_PRELOAD
  • [131023140010] |Why does Linux use a swap partition rather than a file? [131023140020] |It seems to me a swap file is more flexible. [131023150010] |A swap partition is preferred because it avoids the overhead of the file system when all you need is an addressable pool. [131023150020] |But nothing prevents you from using a swap file instead of a swap partition, or in addition to a swap partition. [131023150030] |
  • Create the file: [131023150040] |
  • Initialize the file's contents: [131023150050] |
  • Use it: [131023150060] |
  • See if it worked: [131023150070] |In order to always use the swap file at bootup, edit /etc/fstab and add [131023150080] |[1] http://www.redhat.com/docs/manuals/linux/RHL-8.0-Manual/custom-guide/s1-swap-adding.html [131023160010] |I think that it is mainly because the access time to the data located on a partition is lower. [131023160020] |The point of the swap file is more to help the sysadmin when he is really out of RAM and needs to perform huge operations that would maybe crash his system. [131023160030] |In this case he will sporadically create swap files when needed. [131023160040] |But anyway, you can have both of them. [131023170010] |A swap file is more flexible but also more fallible than a swap partition. [131023170020] |A filesystem error could damage the swap file. [131023170030] |A swap file can be a pain for the administrator, since the file can't be moved or deleted. [131023170040] |A swap file is also slightly slower. [131023170050] |The advantage of a swap file is not having to decide the size in advance. [131023170060] |However, under Linux, you still can't resize a swap file online: you have to unregister it, resize, then reregister (or create a different file and remove the old one). [131023170070] |So there isn't that much benefit to a swap file under Linux, compared to a swap partition. [131023170080] |It's mainly useful when you temporarily need more virtual memory, rather than as a permanent fixture. [131023180010] |Why do unix-heads say "minus"? [131023180020] |A couple of weeks ago I attended a talk on Git by someone who seemed to be from a Windows background. [131023180030] |I say "seemed to be" because he kept saying "dash" when referring to command-line options. [131023180040] |I then recalled something that I found curious in my early days of learning Linux; that is, when referring to options, the resident unix-heads always said "minus". [131023180050] |That is: [131023180060] |Would be said "arr em minus arr ef slash" as opposed to "arr em dash arr ef slash". [131023180070] |Why is this? [131023190010] |I've never seen anyone say "minus" outside of a math scenario (to mean subtract). "Dash" is appropriate and will be more common; this isn't a Unix thing, this was just this one person. [131023190020] |We do have other lingo though, e.g. #! is pronounced shebang. [131023190030] |Here's a link to the current Jargon File for ASCII and how they're said. [131023200010] |I am mainly a Windows guy (Don't down vote me here!) and I typically say "hyphen". [131023200020] |I guess it is just your experience with the people you deal with and how they were brought up. [131023200030] |Nothing specific about their computer background. [131023210010] |A lot of people actually say "tack." [131023210020] |(I'm watching a Hak5 video right now, and the host keeps saying "tack.") [131023210030] |The other more common pronunciations are "dash" and "hyphen." [131023210040] |I have never heard a Unix or Linux guy say "minus" in a command-line context. [131023220010] |The only time I use or hear minus instead of dash is when using chmod to remove a certain permission, e.g. [131023220020] |as the action in question can be considered subtraction. [131023230010] |Would be said "arr em minus arr ef slash" as opposed to "arr em dash arr ef slash". [131023230020] |Why is this? [131023230030] |I think this might be regional, or age-related more than anything else. [131023230040] |Everyone said minus when I was in Uni. ... 
but then at that point all keyboards had a numeric pad, on the right, that had +-*/ etc. [131023240010] |I have used Unix and GNU/Linux for many years, and talked about command-line operators out loud a heck of a lot, and I have never heard anyone say "minus." [131023240020] |In fact, we don't even usually say "dash." [131023240030] |If I'm talking to somebody and the context is clear, I'll just speak the letters of the option; using your example [131023240040] |would be pronounced "arr em arr eff slash." [131023240050] |Of course, this is not something I usually tell someone how to do, so it would be more like [131023240060] |pronounced "arr em arr eff star." [131023240070] |Sometimes I say the dash, like [131023240080] |I would say "soo dough yum dash why update." [131023250010] |Maybe the difference is that the *nix guys learn their system more often from the community, where people use not the perfectly correct terms (dash) but the simple terms (minus). [131023250020] |The users of proprietary software maybe get their knowledge more from trainings, etc., where the use of incorrect terms will result in unrelaxed customers bringing the video training back ;) [131023250030] |Btw: No statistical data were used except my personal stereotypes ;) [131023250040] |In German, "minus" is "Minus" and "hyphen" is "Bindestrich", so it is much too complex to use it to say commands ;) [131023260010] |Well, for me "minus" is more natural, probably because I am not a native English speaker. [131023260020] |My native language is Hungarian, and minus = minusz, but hyphen = valasztojel, so obviously "minus" is easier and shorter. [131023260030] |However, I live in Romania, and minus = minus, but hyphen and dash do not even have one-word translations, so they would be very tedious to use. [131023270010] |I learned Unix in the AT&T System V days (1990), and it went like this: rm -rf /bin/nessus-fetch.rc was spoken as: arr emm minus arr eff slash bin slash nessus dash fetch dot rc, where a minus was an argument indicator and a dash was part of a directory or file name. [131023270020] |I've heard plenty of minus in my time, and usually the dash people were newbies, pronounced noo bees :-) [131023270030] |WAR [131023280010] |There are cases where command line + and - are definitely meant to indicate a subtraction or addition of something, like [131023280020] |or [131023290010] |On Windows, many command line options are / (e.g. dir /?) so saying dash might too easily be confused with slash.... which is exactly like a lot of commands on Windows when you've installed some useful unix-y command line tools - I keep forgetting which ones use / and which use - ! [131023300010] |Personally, I pronounce rm -rf / as "NOOOO!!! [131023300020] |Don't do it!!!!" [131023300030] |;-) [131023310010] |I believe I qualify as a Unix head and I say dash because it has fewer syllables than minus or hyphen. [131023310020] |I'd like to read rm -rf / as "rum ruff slash" but I fear almost no-one would understand me. [131023320010] |I'm surprised that there isn't a definitive answer here. [131023320020] |Someone should do some historical spelunking and figure this one out. [131023320030] |Where and when did the "minus" or "dash" traditions start? [131023320040] |Good thesis topic :-) [131023320050] |I picked up saying "minus" from a bunch of kernel hackers I worked with at a certain company that had an OS that included major parts of BSD (that would be Apple). [131023320060] |I always found that it tripped off the tongue much more easily than dash.
[131023320070] |If I have a file name with a "-" in it, I would never call it minus; I would call it dash. [131023320080] |Thus, I can easily differentiate in conversation between the arguments part of a command and the file name part. [131023320090] |It's pretty rare to include arithmetical expressions in shell commands, so confusion with math seems unlikely. [131023320100] |Based on the other answers here, it sounds like people who are old-time UNIX gurus, or who like myself have hung around with old-time UNIX gurus, are more likely to say "minus". [131023320110] |Thus my suspicion that there's an interesting historical story here. [131023330010] |reasons to say minus [131023330020] |The character you are typing is known as hyphen-minus, so either "hyphen" (-) or "minus" (−) are more correct than dash (–). [131023330030] |The reason for saying "minus" rather than "hyphen" is probably twofold: [131023330040] |
  • fewer people know what a hyphen is
  • [131023330050] |
  • some utilities accept options starting with +, so it's logical to think of plus and minus
  • [131023330060] |Also, many word processing programs convert a double hyphen-minus (--) into a dash (–), which could lead to confusion when discussing GNU long options, e.g. --help. [131023330070] |reasons to say dash [131023330080] |When you write - in a man page, it turns into a dash; you have to write \- to get a minus. man uses the roff/troff system, which was written by Brian Kernighan. [131023330090] |QED. [131023330100] |One syllable versus two. [131023330110] |Laziness wins. [131023340010] |copy recursively except hidden directories [131023340020] |How do I copy recursively like cp -rf *, but excluding hidden directories (directories starting with .) and their contents? [131023350010] |You could just copy everything with [131023350020] |and then delete hidden directories at the destination with [131023350030] |Alternatively, if you have some advanced tar (e.g. GNU tar), you could try to use tar to exclude some patterns. [131023350040] |But I am afraid that it is not possible to only exclude hidden directories but include hidden files. [131023350050] |For example something like this: [131023350060] |Btw, GNU tar has a zoo of exclude-style options. [131023350070] |My favourite is [131023360010] |Alternatively to cp, you could use rsync with --exclude=PATTERN. [131023370010] |Good options for copying a directory tree except for some files are: [131023370020] |
  • rsync: this is basically cp plus a ton of exclusion possibilities. [131023370030] |
  • pax: it has some exclusion capabilities, and it's in POSIX so should be available everywhere (except that some Linux distributions don't include it in their default installation for some reason). [131023380010] |From the Ubuntu Swap FAQ: [131023380020] |Swap space is the area on a hard disk which is part of the Virtual Memory of your machine, which is a combination of accessible physical memory (RAM) and the swap space. [131023380030] |Swap space temporarily holds memory pages that are inactive. [131023380040] |Swap space is used when your system decides that it needs physical memory for active processes and there is insufficient unused physical memory available. [131023380050] |If the system happens to need more memory resources or space, inactive pages in physical memory are then moved to the swap space therefore freeing up that physical memory for other uses. [131023380060] |Note that the access time for swap is slower therefore do not consider it to be a complete replacement for the physical memory. [131023380070] |Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files. [131023390010] |Swap space is the area on a hard disk which is part of the Virtual Memory of your machine, which is a combination of accessible physical memory (RAM) and the swap space. [131023390020] |Swap space temporarily holds memory pages that are inactive. [131023390030] |Swap space is used when your system decides that it needs physical memory for active processes and there is insufficient unused physical memory available. [131023390040] |If the system happens to need more memory resources or space, inactive pages in physical memory are then moved to the swap space therefore freeing up that physical memory for other uses. [131023390050] |Note that the access time for swap is slower therefore do not consider it to be a complete replacement for the physical memory. [131023390060] |Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files. [131023400010] |Buffer overflow attacks [131023400020] |I just learnt today in a Distributed Systems class that there is an address space allocated for every user by the kernel, and if a user wants to get the su privilege, he needs to shift to the kernel address space. [131023400030] |My question: is this the basis for buffer overflow attacks, where a user, if he exceeds his address space, may be able to access the kernel address space or just overwrite it? [131023410010] |No. Kernel space is protected on most architectures by the processor and, unless there is an error in the operating system, an attempt to write there would result in termination of the process. [131023410020] |A buffer overflow attack (usually) overwrites the return address of a function, which puts the control flow in the hands of the attacker. [131023420010] |A process cannot read or write or branch outside its memory. [131023420020] |That means it can't access the memory of other processes, or unallocated memory, or kernel memory. [131023420030] |So even if an attacker triggers a buffer overflow in a process and is able to execute arbitrary code in that process's context, that doesn't give the attacker kernel-level access. [131023420040] |There is one way out: processes can make system calls. [131023420050] |The exact manner in which system calls are made depends on the OS and the processor type; in its simplest form, the processor has a "system call" instruction.
This instruction branches to a particular address where the kernel has installed the system call handling code. [131023420060] |When the system call is performed, the processor changes the access permissions on memory. [131023420070] |This way the kernel runs with elevated privileges, including the ability to read, write and branch to all memory, and the ability to access hardware devices. [131023420080] |Again, the details of how the privilege elevation is performed depend highly on how the system call is implemented on a particular platform; for example, the processor might keep two access control tables (one for user space and one for kernel space) and the system call instruction might switch between these two tables. [131023420090] |The kernel code that handles system calls decodes the arguments passed by the process (how the arguments are passed is again highly platform-dependent). [131023420100] |It may happen that the system call does not validate these arguments properly. [131023420110] |For example, if the kernel expects an array argument and does not check that the whole array fits into the process's address space, that may lead to an ordinary process performing a buffer overflow attack against the kernel, and being able to execute kernel code of its choosing. [131023420120] |For a remote attacker to gain kernel-level access generally requires exploiting two vulnerabilities, one in a networked process and one in the kernel. [131023420130] |Occasionally there is a bug in the kernel network processing code that permits a one-step attack. [131023430010] |initramfs image compression other than gzip [131023430020] |Can the initramfs image be compressed by a method other than gzip, such as lzma? [131023440010] |Yes. [131023440020] |I use an in-kernel initrd and it offers at least the following methods: [131023440030] |
  • None (as it is compressed with the kernel)
  • [131023440040] |
  • GZip
  • [131023440050] |
  • BZip
  • [131023440060] |
  • LZMA (possibly zen-only)
  • [131023440070] |EDIT: You can use it on an external file and with LZMA (at least on Ubuntu). [131023440080] |EDIT 2: Wikipedia states that the Linux kernel supports gzip, bzip2 and lzma (depending, of course, on which algorithms are compiled in). [131023450010] |Where can I find sources for... [131023450020] |
  • Emacs lisp source code for .elc files? [131023450030] |e.g. cal-mayan.elc
  • [131023450040] |
  • Files in the /bin directory? [131023450050] |e.g. cat, split, and echo
  • [131023460010] |It depends a bit on what distribution you use. [131023460020] |On a Debian-style system you could do something like this: [131023460030] |The last command will fetch the source archive and all the patches which were used to build the binary package that includes the cat command. [131023460040] |Alternatively you could just Google for it. [131023460050] |Or even use Google Code Search. [131023470010] |For Emacs however, there is a special package called emacs23-el under Ubuntu which includes all of the .el files of Emacs and installs them alongside the .elc files. [131023470020] |This has the advantage that you can directly look at function and variable definitions from Emacs. [131023470030] |Say you want to look at how the function string-insert-rectangle is defined. [131023470040] |You hit C-h f string-insert-rectangle and a Help buffer appears with documentation, telling you that it is defined in rect.el. [131023470050] |If you click on rect.el, Emacs automatically shows the function definition. [131023480010] |bash: array of strings expanded to paths? [131023480020] |Suppose I have the following initialization of a bash array: [131023480030] |If I do iteration using: [131023480040] |then the content of my_array is expanded to contain the paths of all files with a .so extension in the /usr/lib/ directory, but I just wanted the array to contain two strings: "/usr/bin" and "/usr/lib/*.so". [131023480050] |How should I do that? [131023490010] |or
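If the goal is for the array to hold exactly those two literal strings, the usual approach is to quote both the assignment and the expansion - a minimal sketch, assuming bash (the array name and the echo are just for illustration):

    my_array=("/usr/bin" "/usr/lib/*.so")   # quoting keeps the glob literal at assignment time
    for entry in "${my_array[@]}"; do       # quoting the expansion prevents globbing at use time
        echo "$entry"
    done

Leaving either set of quotes off lets the shell perform pathname expansion on the *.so pattern, which is most likely what produces the long file list described in the question.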