[131055410010] |Lowercasing all directories under a directory [131055410020] |I want to lowercase every directories' name under a directory. [131055410030] |With which commands can I do that? [131055420010] |There isn't a single command that will do that, but you can do something like this: [131055420020] |If you need it to be robust, you should account for when there is already two directories that differ only in case. [131055420030] |As a one-liner: [131055430010] |All the directories at one level, or recursively? [131055430020] |
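For the flat, one-level case, the "something like this" loop from the first answer could take roughly this shape (a hedged sketch relying on bash 4's ${var,,} lowercasing; names that are already lowercase are skipped, and mv -i guards against case-only collisions):

    for d in */; do
      d=${d%/}                       # strip the trailing slash from the glob match
      lc=${d,,}                      # bash >= 4: lowercase the whole name
      [ "$d" = "$lc" ] && continue   # already lowercase, nothing to do
      mv -i -- "$d" "$lc"            # -i asks before clobbering an existing directory
    done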

Zsh

[131055430030] |At one level: [131055430040] |Recursively: [131055430050] |Explanations: zmv renames files matching a pattern according to the given replacement text. -o-i passes the -i option to each mv command under the hood (see below). [131055430060] |In the replacement text, $1, $2, etc, are the successive parenthesized groups in the pattern. ** means all (sub)*directories, recursively. [131055430070] |The final (/) is not a parenthesized group but a glob qualifier meaning to match only directories. ${2:l} converts $2 to lowercase. [131055430080] |
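Based on that description, the two zmv invocations were presumably along these lines (a reconstruction, so treat the exact patterns as an assumption rather than verbatim commands):

    autoload -U zmv
    # at one level: every directory in the current directory
    zmv -o-i '(*)(/)' '${1:l}'
    # recursively: lowercase only the last component of each directory path
    zmv -o-i '(**/)(*)(/)' '$1${2:l}'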

Portable

[131055430090] |At one level: [131055430100] |The final / restricts the matching to directories, and mv -i makes it ask for confirmation in case of a collision. [131055430110] |Remove the -i to overwrite in case of a collision, and use yes n | for …. to not be prompted and not perform any renaming that would collide. [131055430120] |Recursively: [131055430130] |The use of -depth ensures that deeply nested directories are processed before their ancestors. [131055430140] |The name processing relies on there being a /; if you want to operate in the current directory, use ./* (adapting the shell script to cope with . or * is left as an exercise for the reader). [131055430150] |
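Concretely, the two portable approaches described above could be sketched like this (assumptions: a POSIX sh, and a find whose -name accepts the '*[A-Z]*' pattern in your locale):

    # at one level: the trailing / in */ restricts the glob to directories
    for d in */; do
      d=${d%/}
      lc=$(printf %s "$d" | tr '[:upper:]' '[:lower:]')
      [ "$d" = "$lc" ] || mv -i -- "$d" "$lc"
    done

    # recursively: -depth renames deep directories before their parents
    find . -depth -type d -name '*[A-Z]*' -exec sh -c '
      dir=${0%/*} base=${0##*/}
      mv -i -- "$0" "$dir/$(printf %s "$base" | tr "[:upper:]" "[:lower:]")"
    ' {} \;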

Perl rename

[131055430160] |Here I use the Perl rename script that Debian and Ubuntu ship as /usr/bin/prename (typically available as rename as well). [131055430170] |At one level: [131055430180] |Recursively, with bash ≥4 or zsh: [131055430190] |Recursively, portably: [131055440010] |I took this as a one-liner challenge :) First, establish a test case: [131055440020] |I use find to spot the directories with uppercase letters and then downcase them via sh -c 'mv {} echo {} | tr [:upper:] [:lower:]'. [131055440030] |Guess using sh -c is a bit hack-ish, but my head always explodes when I try escaping things for find directly. [131055440040] |Be warned: This solution does not check whether downcasing leads to collisions! [131055450010] |How do I get sar to show for the previous day? [131055450020] |on our servers, typing sar show's the system load statistics for today starting at midnight, is it possible to show yesterdays statistics? [131055460010] |Usually, sysstat, which provides a sar command, keeps logs in /var/log/sysstat/ or /var/log/sa/ with filenames such as /var/log/sysstat/sadd where dd is a numeric value for the day of the week (starting at 01). [131055460020] |By default, the file from the current day is used; however, you can change the file that is used with the -f command line switch. [131055460030] |Thus for the 3rd of the month you would do something like: [131055460040] |If you want to restrict the time range, you can use the -s and -e parameters. [131055460050] |If you want to routinely get yesterday's file and can never remember the date and have GNU date you could try [131055460060] |I highly recommend reading the manual page for sar. [131055470010] |sar - Collect, report, or save system activity information. [131055480010] |sar - Collect, report, or save system activity information. [131055490010] |svn and webserver files ownerships [131055490020] |I've an issue with files ownerships. [131055490030] |I have a drupal website and the "files" folder needs to be owned by "www-data" in order to let the users to upload files with php. [131055490040] |However I'm now using svn and I need all folders and files to be owned by my ubuntu user in order to work. [131055490050] |Files have 775 permissions, so both user and group have full access to the files. [131055490060] |I've added my ubuntu user to www-data group but it still doesn't work. [131055490070] |I dunno why. [131055490080] |I've tried to change the group to my ubuntu user group but it still doesn't work. [131055490090] |I've also tried to set my ubuntu user as owner, and www-data as group, but in this case the webserver php script cannot anymore upload files into the folder (and this is again strange because the group has full privilegies). [131055490100] |thanks [131055500010] |In any sort of web development, there are usually several good reasons to handle development differently from production. [131055500020] |It's convenient to have a local copy of your web site in your home directory for development, but accept that it's okay for some things to not work fully until you push the changes to the production environment. [131055500030] |You shouldn't be trying to tie the Drupal permission scheme and your user account permission scheme together. [131055500040] |Let the production Drupal stuff live in /var/www/html or wherever, with all the permissions that make Apache/PHP/Drupal happy. [131055500050] |Let your development tree be owned by you. 
[131055500060] |Whenever you finish work on a new feature for the web site, use some sort of synchronization tool to push the changes from dev to production, incidentally getting new permissions along the way. [131055500070] |Personally, I use rsync for this. [131055500080] |That has the nice advantage that the production server can be across the Internet, since tunneling rsync through ssh is trivial. [131055500090] |That's an important feature with web development, since there are usually good reasons to host your web site on a server not on your own LAN. [131055500100] |I use a command like this for my sites: [131055500110] |That command will make the files be owned by the www user on www.mysite.com, decoupling the dev permission scheme from production. [131055500120] |It sounds like you want to use SVN instead, which is fine. [131055500130] |You just have to make it so the dev and production boxes can both see the SVN server. [131055500140] |As with rsync, you want to be tunneling the SVN protocol through ssh here, for security. [131055500150] |When it's time to push a change to production, check it into SVN, then check it out on the production box. [131055500160] |Actually, I'd recommend using svn export instead of svn checkout to avoid scattering .svn subdirectories all over your web server tree. [131055500170] |The rsync method avoids that with --exclude rules. [131055510010] |What is the concept behind "tty" in linux? [131055510020] |Where did the terminology "tty" come from in Linux? [131055520010] |This is likely to be a reference to teletype terminals, which would have been used in the early(ish) days of computers. [131055520020] |http://en.wikipedia.org/wiki/TTY [131055530010] |As Danny has stated tty is teletype terminals. [131055530020] |The fact is that most of us have used it every time, but most of has not gone to understand it. [131055530030] |Here is a very good article which gives us a basic understanding of TTYs in Linux. [131055530040] |The TTY Demystified [131055530050] |Thanks, Sen [131055540010] |How to change the install path of my Linux Source tree? [131055540020] |I was trying to bring up my custom kernel. [131055540030] |I did the following : [131055540040] |I would like to change the install PATH. [131055540050] |How can i do that? [131055540060] |I tried doing [131055540070] |But then it is only creating vmlinux.bin (it is not creating the ramdisk image!) [131055540080] |But if I don't do that, make install will automatically create the ramdisk image in the default /boot folder. [131055540090] |How can i change that? [131055550010] |You use a separate tool to create an initrd image. [131055550020] |Most distros use a command called mkinitrd, but Fedora has a tool called dracut that is supposed to replace mkinitrd. [131055560010] |Yup, I found where is that install path. [131055560020] |It is inside /sbin. [131055560030] |The script file name is installkernel. [131055560040] |Just need to make a couple of changes in there and i could change the default install path of my Linux source(which was /boot). [131055570010] |How to add an iso image to grub2? [131055570020] |I read grub2 allows booting from an ISO image (at least for most live-cds). [131055570030] |How can I add such an entry to my grub configuration? [131055570040] |Please mention all files to modify and all commands to run since I haven't used grub2 before upgrading. [131055580010] |As far as I know while grub2 supports iso files and booting from CD it does not support loading systems from one yet. 
[131055580020] |cdboot module is compiled from cdboot.S which is support of booting from CD but not loading kernels (at least as of 1.98). [131055590010] |Network connection drops after a few seconds [131055590020] |I am on Debian. [131055590030] |I configured my NIC with a static IP (192.168.1.56). [131055590040] |When I try to connect to a network, initially with ifconfig eth2 I get (correctly): [131055590050] |but after a few seconds the 192.168.1.56 disappears and after some other seconds the inet6 address disappears too. [131055590060] |When I press in the nm-applet it requires a password, but in the meantime it tries to connect. [131055590070] |At my university, the connection is a DHCP one. [131055590080] |It works for the first few seconds but after it doesn't. [131055590090] |How do I go about fixing this? [131055590100] |Here it is the relevant part of the syslog: (static ip configuration) [131055600010] |simultaneously share /dev/videoX with multiple applications? [131055600020] |Hello, the goal is to use the same webcam for video chat apps and for home security at the same time. [131055600030] |Currently, the webcam is working just fine with either VLC (or mjpg-streamer) and with Kopete - just not simultaneously. [131055600040] |I am on Kubuntu 10.4 but at least one of these setups will be on Debian/Linux. [131055600050] |A GNU/Linux generic method would be best, but Debian/Linux specific (with udev?) would be just fine. [131055600060] |I have a custom udev rule to control naming of the webcam and I had tried adding "MODE = "0666"" and I have tired running Kopete as root after opening the device with VLC, a permissions angle might not the trick. [131055600070] |Any brilliant insights? [131055610010] |V4L2 API does not specify any sharing of one device between multiple applications. [131055610020] |It's not obvious how this is possible at a low level as each application may want to set different resolution/colorspace/etc. options. [131055610030] |But it ought to be relatively straightforward to modify something like v4l2vd to be the single reader of the actual hardware device and make multiple copies for multiple clients in userspace. [131055620010] |attach terminal to X desktop running in VM [131055620020] |At home I'm setting up a CentOS 5.5 server that will be running a bunch of KVM VMs. [131055620030] |Normally pressing the CTRL-SHIFT-Fn combination on the attached keyboard switches to terminals on the host machine. [131055620040] |What I'd like to do instead is have some number of CTRL-SHIFT-Fn combinations attach to the VMs that are running, in essence have the key combination behave like a KVM switch. [131055620050] |So for example, pressing CTRL-SHIFT-F1 displays a text terminal for the host machine, but pressing CTRL-SHIFT-F2 displays an X session that is running on one VM and pressing CTRL-SHIFT-F3 displays yet another VM terminal. [131055620060] |Some of the VMs will have X installed, so I'd like the solution to behave just like a 'normal' X session: Presents an X login screen if I haven't already logged in. [131055630010] |It would help if you tell what virtualization tool you use. [131055630020] |With VirtualBox, that would be quite easy using the internal rdp (or vnc with OSE edition) service. [131055640010] |I can see two ways of solving this: [131055640020] |
  • Set up multiple X sessions on tty2 through ttyN, all of which by default start the virt-manager and connect to the appropriate virtual machine, and run the console full screen.
  • [131055640030] |
  • Enable XDMCP in GDM on the virtual machines, allowing connections over the VM's private subnet. [131055640040] |Set up multiple X sessions on tty2 through ttyN, setting them up to use XDMCP to connect to the appropriate VM's X server (see the sketch after this list).
  • [131055650010] |Does anyone know how to configure a Wacom Bamboo tablet to work left-handled? [131055650020] |I have my Wacom Bamboo tablet working fine under Fedora 14 but would like to switch it from right- to left-handed. [131055650030] |Anyone any ideas how? [131055650040] |Thanks for any help. [131055660010] |Use xsetwacom. [131055660020] |Basically, you'll want to list your current configuration, then re-configure the buttons to be the opposite way around, e.g. what button 1 did, button 4 should be (or whatever). [131055660030] |You could also try using xinput. [131055660040] |Something like: [131055660050] |Do you also need to remap the x and y axes? [131055660060] |Final thought: what happens if you go to System->Preferences->Mouse? [131055660070] |Maybe changing it to left handed has some effect? [131055670010] |`power/persist` file not found in USB device sysfs directory [131055670020] |The file /usr/share/doc/linux-doc/usb/persist.txt.gz mentions that the USB-persist capability can be enabled for a given USB device by writing 1 to the file persist in that device's directory in /sys/bus/usb/devices/$device/power. [131055670030] |This is said — if I understood correctly — to allow mountings of volumes on the drive to persist across power loss during suspend. [131055670040] |However, I've discovered that the device I'd like to enable this facility for — a USB hard drive — does not have such a file in its corresponding directory, and that attempts to create it are rebuffed. [131055670050] |Is there perhaps a kernel module that needs to be loaded for this to work? [131055670060] |Do I need to build a custom kernel? [131055670070] |I'm running ubuntu 10.10. [131055680010] |Is it possible to execute code in heap space? [131055680020] |I would like to know if i can execute a code piece sitting inside the heap space? [131055690010] |Maybe. [131055690020] |If the heap is executable, you can branch to that code. [131055690030] |But some unix variants make the heap space non-executable, so as to make exploits of some security vulnerabilities such as buffer overflows more difficult (then even if you can inject code into a program, you might not be able to branch to it). [131055690040] |(See the linked article for a discussion of unix variants and their configuration.) [131055690050] |Also some processor architectures have separate caches for code and data, so you may need to issue a cache flush instruction. [131055690060] |All in all, this isn't something you want to do by hand. [131055690070] |There is a standard unix API to load and execute code, which will do what it takes to make the loaded code executable: dlopen. [131055690080] |The code has to be loaded from a file. [131055690090] |Just-in-time compilers typically try to find faster interfaces than dlopen. [131055690100] |They have to manage the highly platform-dependent ways of ensuring code executability. [131055690110] |EDIT: Thanks to Bruce Ediger for reminding me of the need to flush the cache. [131055700010] |On some hardware (like HP's HP-PA CPUs) it's far more difficult, and on others (like DEC Alpha CPU) you have to do an instruction cache flush first, but yes, in general, you can execute code on the heap. [131055700020] |The following is a reasonably decent C language program that executes code "on the heap". [131055710010] |Delete extraneous files from dest dir via rsync? 
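For the XDMCP option, each extra console can be little more than an X server pointed at a VM's display manager; a minimal sketch (the display number, vt and guest address 192.168.122.11 are illustrative assumptions):

    # e.g. started by an init script tied to tty3; the VM's GDM must allow XDMCP
    exec /usr/bin/Xorg :2 vt3 -query 192.168.122.11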
[131055710020] |Say I have [131055710030] |rsync -d --delete SRC:{*.jpg,*.txt} DEST [131055710040] |It doesn't remove hello.jpg from DEST, any idea how to archive this? [131055720010] |The reason your command isn't working is explained by the manual page for rsync (emphasis added): [131055720020] |--delete [131055720030] |This tells rsync to delete extraneous files from the receiving side (ones that aren’t on the sending side), but only for the directories that are being synchronized. [131055720040] |You must have asked rsync to send the whole directory (e.g. "dir" or "dir/") without using a wildcard for the directory’s contents (e.g. "dir/*") since the wildcard is expanded by the shell and rsync thus gets a request to transfer individual files, not the files’ parent directory. [131055720050] |Files that are excluded from the transfer are also excluded from being deleted unless you use the --delete-excluded option or mark the rules as only matching on the sending side (see the include/exclude modifiers in the FILTER RULES section). [131055720060] |Thus, when you run [131055720070] |the unwanted files in DEST are not being deleted because you haven't actually asked for a directory to be synced, but just for a handful of specific files. [131055720080] |To get the results you desire, try something like this: [131055720090] |Notice that the order of the include and exclude directives matter. [131055720100] |Essentially, each file is checked against the include or exclude patterns in the order that they appear. [131055720110] |Thus, files with .jpg or .txt extensions are synced since they match the "included" patterns before they match the excluded "*" pattern. [131055720120] |Everything else is excluded by the --exclude '*' pattern. [131055720130] |The --delete-excluded option ensures that even excluded files on the DEST side are deleted. [131055730010] |Manupilating `/dev/video` [131055730020] |I'd like to take the video stream from /dev/video0, apply some effects or changes and make the result available on /dev/video1. [131055730030] |/dev/video0 ---> Apply Effects ---> /dev/video1 [131055730040] |For example, mplayer tv:// -vo caca will display the output of /dev/video in ascii art. [131055730050] |I would like to make that available on /dev/video1 so that I could send that through skype instead of my default webcam feed.... [131055730060] |Any suggestions? [131055740010] |For sure. [131055740020] |Here are two suggestions: [131055740030] |
  • Behind the scenes CLI. [131055740040] |Use V4L2VD to create a virtual video device such as /dev/videoVirt1 and pipe through mplayer for the effects. [131055740050] |Even some similar examples in the notes.
  • [131055740060] |
  • Use a fat desktop program such as webcamstudio to create the pipes and do your skype/broadcast wonders - still with mplayer for the ascii effect
  • [131055740070] |Good Luck! [131055750010] |Quickest way to change dir from /xxxxx/foo/yyyyyy to /xxxxx/bar/yyyyyy [131055750020] |Using bash, what is the easiest way to 'replace' a given part of the current path with something else? [131055750030] |If my current path is of the form /xxxxx/foo/yyyyy, how can I jump to the /xxxxx/bar/baz/yyyyy directory with the shortest command? [131055760010] |Not so short, but works: cd ${PWD/foo/bar\/baz} [131055770010] |The second one looks "cooler" but the first one is shorter. [131055780010] |Asuming you changed to /xxxxx/foo/yyyyy with cd /xxxxx/foo/yyyyy and want to change directly after this command to the other directory, you could use !!:s/foo/bar\/baz/. [131055780020] |Which means, repeat the last command, but replace foo with bar/baz. [131055780030] |When you executed some commands between the two cds, you could use !cd:s/foo/bar\/baz/. [131055780040] |Whichs means, repeat last cd command and replace. [131055780050] |For some more examples and history commands, take a look at the Bash Reference Manual. [131055790010] |You can leverage a shell function to provide you this ability as needed: [131055790020] |Which would be called from /foo/bar/ as: [131055790030] |Please note that the "'s are not required for this example, but would be required for special strings such as directories with space character(s) in the name. [131055800010] |To add to this concept of using the history, I recommend looking into the pushd and popd commands in bash, which can very easily switch between the stored directories. [131055800020] |The blog link is also a very nice resource for other methods of directory switching, including comments on how to do this within a script. [131055810010] |In zsh: cd foo bar [131055810020] |In bash: cd $(zsh -c 'cd foo bar') which can be shortened to $(zsh -c 'cd foo bar') under shopt -s autocd in bash ≥4. [131055820010] |Appending text to end of a textfile [131055820020] |How can I append a new line to a text file followed by current date and time? [131055830010] |If you want to have just in one line [131055840010] |How to understand the kernel panic core dump output? [131055840020] |This is the output I get when I run one of my applications: [131055840030] |Am I able to get some info from this dump regarding the issue which is making this happen? [131055850010] |Core dumps are easier to read if you can associate them with a symbol table. [131055850020] |That way a debugging tool can translate memory map addresses into mnemonics, i.e., data structures, function names, global variables and so on. [131055850030] |The call trace above, for one, would be a lot more helpful. [131055850040] |For a panic-induced dump, the usual way to read a core dump is to track back from the failure to a likely cause. [131055850050] |The most likely trail to follow is the process that was executing at the point of failure. [131055850060] |In most kernels, with the appropriate debug symbol information available, you can then step back, instruction by instruction, to find a bad value. [131055850070] |I'm not familiar with the presentation of this output, but it looks like maybe your kernel received an interrupt in a state in which it asserted no interrupt should arrive. [131055850080] |This kind of rule is a guard against computing on data when it doesn't appear to be safe to do so, so the kernel panics to guard against munging. [131055850090] |Just a guess on my part, though. 
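If symbol information is available for the kernel that panicked, a first step is mapping raw addresses from the call trace back to names; a rough sketch (the address c04f3a10 and the file paths are made up for illustration):

    # exact matches against the running kernel's symbol tables
    # (these only hit if the address is exactly a symbol's start)
    grep c04f3a10 /proc/kallsyms
    grep c04f3a10 /boot/System.map-$(uname -r)
    # with an unstripped vmlinux that still carries debug info
    addr2line -e vmlinux -f c04f3a10    # prints the function and file:line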
[131055850100] |A lot of kernels run with all symbol information stripped to get as light as they can. [131055850110] |It looks like you'd have to compile these values into your kernel to get more easily digestible panic output. [131055860010] |Is 'Shut down computer when finished' during Avidemux encoding stage useful? [131055860020] |I get the option to Shut down computer when finished during encoding stage of Avidemux. [131055860030] |Why would I want to do that? [131055870010] |Maybe because you're encoding is going to take hours after you leave and you don't really want to keep the computer running when you're not using it to save power. [131055880010] |How to re-encode a DVD into a single file? [131055880020] |A DVD file layout has a directory named "VIDEO_TS" which contains a variety of files. [131055880030] |These files can include multiple video and audio channels, subtitles, and menu structure (UI). [131055880040] |Is there a format than can contain all of these, but in a single file? [131055880050] |Can this format do this without losing video quality? [131055880060] |What about retaining the menu? [131055880070] |Maybe even retaining subtitle and commentary tracks? [131055880080] |How do I do it (command)? [131055880090] |[note] These aren't hard requirements. [131055880100] |I'm just looking for something that can handle more than just audio/video. [131055890010] |The only way to do everything you ask is to rip a disk image of the DVD and then play the image. [131055890020] |Any other process which doesn't preserve the exact format of the disk will likely remove one or more of the features you're after because DVD is a very specific variant of the MPEG-2 standard. [131055890030] |DVD player programs —which you still have to use to interpret the menu information —often depend on their input following the standard closely. [131055890040] |There are lots of ways to rip a DVD image, enough that it's worth a separate question. [131055890050] |A quick and dirty way to do it is from the command line: [131055890060] |Obviously you must substitute the correct DVD drive dev node if yours isn't mounted on scd0. [131055890070] |Some media players can play ISO files directly, such as VLC and MPlayer. [131055890080] |If you want to use a player that can't open ISO files itself, you can mount an ISO disk image as a virtual DVD. [131055890090] |This is indistinguishable from a real DVD, from the player's perspective. [131055890100] |I assume you are using Linux, where you use the loop device for this: [131055890110] |Depending on the permissions on /dev/loop?, you might have to be root (or use sudo) to do the second step. [131055890120] |Having done this, you point your DVD player program at that virtualdvd directory. [131055890130] |Other *ixes usually have an equivalent mechanism to Linux's loop device, which may work a little differently. [131055890140] |EDIT: If you can give up the interactive menu requirement, there are lots of alternatives: [131055890150] |
  • MPEG-2: If you simply rip the VOB files from the DVD, you can concatenate them and feed them to anything that can play generic MPEG-2 files. [131055890160] |You lose menuing because the result no longer conforms to the DVD spec. [131055890170] |The data is still there in private sub-streams, but I'm not aware of any player that will interpret them when you reorganize the data this way. [131055890180] |Beware that simply blindly concatenating VOB files isn't what you want anyway. [131055890190] |You'll drag in things like the motion clip used behind the DVD menu, which is often stored as an independent VOB file. [131055890200] |Sorting out which data to pull from which VOB file is one of the services provided by a DVD ripping program.
  • [131055890210] |
  • MPEG-4 part 14: This is the container format usually used by H.264 files. [131055890220] |It is based on the QuickTime format (see next), and so it shares a lot of its power. [131055890230] |When you use a program like Handbrake to rip a DVD to H.264, this is what you get. [131055890240] |You can ask Handbrake to give you all the audio tracks, angles, etc. [131055890250] |Your ability to see all this is limited only by the capabilities of the program you use to play it.
  • [131055890260] |
  • QuickTime: Supports any number of tracks, including subtitles, but there is no standard way to translate DVD menus to something a QuickTime player could understand. [131055890270] |Apple's QuickTime player used to include Adobe Flash support, which could have been pressed into this use, but Apple dropped that feature years ago.
  • [131055890280] |
  • MKV: Like QuickTime, supports any number of audio and video tracks, plus subtitles.
  • [131055890290] |
  • Ogg: Pretty much the same as MKV, from a feature standpoint. [131055890300] |There are a bunch of niggly technical differences but they don't matter for your purposes.
  • [131055900010] |Checkout the ripper.sh script supplied with x264 transcode. [131055900020] |From the README: [131055900030] |To create an iso from a disk you own, edit DEFAULT_ISO_STORAGE and DEFAULT_DVD_RIPPING_DEVICE in the config and run: [131055900040] |The script takes care of 'unlocking' the drive, such that the resulting .iso does not include any CSS crap (just running dd is not enough). [131055900050] |With this method you get one file, retain the menu, all tracks and all subtitles. [131055910010] |VI mode in freetalk [131055910020] |I somehow managed to get my freetalk into vi-mode. [131055910030] |I'm just unsure how I did it (if I restart freetalk I get an emacs-mode). [131055910040] |Anyone an idea? [131055920010] |According to the freetalk documentation, it uses GNU Readline for its line editing features. [131055920020] |According to the GNU Readline documentation: [131055920030] |In order to switch interactively between emacs and vi editing modes, use the command M-C-j (bound to emacs-editing-mode when in vi mode and to vi-editing-mode in emacs mode). [131055920040] |The Readline default is emacs mode. [131055920050] |Perhaps you have accidentally hit this keyboard shortcut: M-C-j. [131055920060] |For those not familiar with the notation, this usually refers to ALT + CTRL + j. Pressing this key combination again should return you to emacs mode. [131055930010] |make my zsh prompt show mode in vi mode [131055930020] |I use bindkey -v (for bash-ers set -o vi I think that works in zsh too) or vi(m) mode. but it bugs me that I don't have any visual cue to tell me whether I'm in insert mode or command mode. [131055930030] |Does anyone know how I can make my prompt display the mode? [131055940010] |you can try VimMode [131055950010] |I found this via SU. [131055950020] |Here's the basic example, though I'm still customizing it for myself: [131055950030] |I'd explain it except I don't really understand it yet [131055960010] |You've already found zle-keymap-select which is executed whenever the mode changes. [131055960020] |You could use it to set some other visual indicator than the prompt, depending on what your terminal supports it (and your taste in mode indicator display, of course). [131055960030] |There is a standard terminfo capability to change the shape of the cursor. [131055960040] |However some terminals display the same cursor in both modes. [131055960050] |Xterm's notion of a less visible cursor is to make it blink (and this must be enabled with the -bc command line argument or cursorBlink resource). [131055960060] |With some terminals, you can also change the cursor color with print -n '\e]12;pink\a' (by color name) or print -n '\e]12;#abcdef\a' (by RGB specification). [131055960070] |These sequences are described in the xterm documentation, in the ctlseqs file; modern terminal emulators typically emulate xterm, though they might not support all its features. [131055970010] |Graphical boot-up screen lost after upgrading the kernel [131055970020] |I used the Ubuntu 10.10 alternate install CD to install Maverick to an encrypted partition on a USB stick. [131055970030] |This worked perfectly, but after the first cycle of updates that took the kernel from 2.6.35-22 to 2.6.35-24 I no longer get the graphical boot screen that asks for my passphrase. [131055970040] |Instead I get a similar looking one that uses ASCII art. [131055970050] |If I select the older 22 kernel from the bootloader, I still get the nice graphical screen. 
[131055970060] |What do I need to do to get the nicer boot-up interface with the newer 24 kernel? [131055970070] |Thanks, PaulH [131055980010] |Why do all my DNS queries resolve to 192.168.1.251? [131055980020] |All my queries on one machine on my network have suddenly started to resolve to 192.168.1.251. [131055980030] |This machine was used as the DNS server by other machines, so I noticed it as soon as it started happening and have switched all other machines to use 8.8.8.8 directly, which works. [131055980040] |The machines are all on a 192.168.0.x IP. [131055980050] |It does run dnsmasq, which I've restarted to no effect and so stopped, again, no difference. [131055980060] |/etc/resolv.conf did have entries for 127.0.0.1 and the router's IP, I've changed it to just contain one for 8.8.8.8 and there's nothing in /etc/hosts for a 192.168.1.251 IP. [131055980070] |Any ideas would be appreciated! [131055980080] |Edit: This seemed to start working without any sign of any changes having effected the behaviour. [131055980090] |I'm still none the wiser, but including the files below for completeness (and in case it comes back!) [131055990010] |Wild guess: Inappropriate wildcard DNS? [131055990020] |In the root hints? [131056000010] |One possible explanation could be iptables + dnsmasq (or other nameserver) gone rogue. [131056000020] |
  • bind, dnsmasq, some other nameserver are running locally replying 192.168.1.251 to everything you ask of it.
  • [131056000030] |
  • iptables rewrites outgoing packets on udp port 53 to end up somewhere locally where the silly nameserver listens/answers
  • [131056000040] |The reasoning behind this being: You use dig to troubleshoot and instruct dig with the @ option to go directly to 8.8.8.8, dig should now ignore nameserver(s) listed in /etc/resolv.conf. [131056000050] |As dig itself implements a resolver and generates queries itself, then sends it to 8.8.8.8 and parses/prints replies, this should eliminate anything funky in the configuration of libc's resolver on the box (which pretty much everything else uses). [131056000060] |This would suggest that: [131056000070] |
  • Google has misconfigured its nameservers to give you bogus replies, which is unlikely
  • [131056000080] |
  • something along the way between you and 8.8.8.8 intercepts and redirects dns queries and generates a bogus reply; however, since your other machines on the same network with their resolvers pointed to 8.8.8.8 get sane results, this isn't likely either
  • [131056000090] |
  • something on this computer intercepts outbound DNS queries and directs them elsewhere which generates silly/wrong replies
  • [131056000100] |So, I'd check if there's anything in the OUTPUT chain of the nat table redirecting dns traffic somewhere it shouldn't go ? (iptables -t nat -n -v -L OUTPUT). [131056000110] |You can reproduce this behavior with something along the lines of: [131056010010] |You're on a wifi hotspot and you haven't "agreed to the terms"? [131056020010] |What process created this X11 window? [131056020020] |Given an X11 window ID, is there a way to find the ID of the process that created it? [131056020030] |Of course this isn't always possible, for example if the window came over a TCP connection. [131056020040] |For that case I'd like the IP and port associated with the remote end. [131056020050] |The question was asked before on Stack Overflow, and a proposed method was to use the _NET_WM_PID property. [131056020060] |But that's set by the application. [131056020070] |Is there a way to do it if the application doesn't play nice? [131056030010] |If you have xdotool installed, then [131056030020] |xdotool selectwindow getwindowpid [131056030030] |followed by clicking on the window in question will return the PID. [131056030040] |(There are other ways of selecting the window in question, e.g., if you have its window ID you can just do xdotool getwindowpid . [131056030050] |You can also select by name or class, etc.) [131056030060] |I do think this requires some playing nice on behalf of the WM. [131056030070] |I haven't experimented much, or needed to. [131056040010] |The _NET_WM_PID isn't set by the window manager (as just another X11 client, how would it know?). [131056040020] |Instead, compliant X11 clients (applications) are expected to set _NET_WM_PID and WM_CLIENT_MACHINE on their own windows. [131056040030] |Assuming a well-behaved application, this will be true whether a window manager is running or not. [131056040040] |If WM_CLIENT_MACHINE is your own hostname, then the PID should be meaningful. [131056040050] |Otherwise, "I'd like the IP and port associated with the remote end" — I'm not sure what that means. [131056040060] |For example, if you have an ssh session open with X forwarding enabled, windows opened by forwarded apps will be marked with remote PID and hostname, but you don't necessarily have any way to connect back to that remote host. [131056050010] |Using zsh's line editor to wrap around subprocesses [131056050020] |Is it possible to use zsh's built-in line editor (zle) to feed input to a subprocess? [131056050030] |That is, I would like to run zlewrap mycommand where zlewrap is a zsh function and mycommand is any program that just reads lines from stdin; zlewrap would effectively provide zle's line edition capabilities to mycommand. [131056050040] |This is on the model of rlwrap which does just this, but with readline and not zle for line edition. [131056060010] |OCR on Linux systems [131056060020] |I have always found OCR technology to be behind on open source systems. [131056060030] |I've also watched the Ocropus project since its infancy. [131056060040] |I've tried what I've heard is the best OCR engine available for Linux, Tesseract, and have found it woefully lacking for business documents. [131056060050] |Are there any other more promising OCR implementations? [131056060060] |What about the even more hopeful goal for interpreting handwriting? [131056060070] |What is possible on *nix systems in this field? [131056070010] |I found a similar question over on StackOverflow and the Asprise OCR SDK, one of the linked commercial products, boasts a Linux version. 
[131056080010] |... [131056080020] |OCR is more than "only character recognition". [131056080030] |Image handling, preprocessing - page/layout analysis to find the texts, images, tables or barcodes. [131056080040] |For the recognition you have to dela with different fonts, sizes and languages. [131056080050] |This is important because to get good results you have to use dictionaries and language definitions. [131056080060] |Finally people expect more export options than text -e.g XML, RTF or searchable PDF. [131056080070] |There are some commercial options for SDKs, but they are not cheap and for free. [131056080080] |Recently I found a CLI OCR for Linux from ABBYY: http://www.ocr4linux.com/ - there is a free 100 page trial. [131056090010] |

    Selected questions

    [131056090020] |
  • Rsync filter: copying one pattern only
  • [131056090030] |
  • resume transfer of a single file by rsync
  • [131056090040] |
  • Sync a local directory with a remote directory in Linux
  • [131056090050] |

    See also

    [131056090060] |If rsync doesn't seem to be able to do what you want, also look under sync. [131056100010] |rsync is a tool to efficiently copy directory hierarchies, locally or remotely, with powerful filters to decide what gets copied [131056110010] |The kernel of every operating system builds the bridge between the application and the actual processing on the hardware level. [131056110020] |A typical UNIX kernel is responsible for: [131056110030] |
  • CPU: program execution
  • [131056110040] |
  • Memory Management
  • [131056110050] |
  • Processes (Scheduling, Synchronization, Interprocess Communication)
  • [131056110060] |
  • Signals (Exceptions, Interrupts)
  • [131056110070] |
  • Filesystems (Virtual, Block)
  • [131056110080] |
  • I/O Architecture (Devices, Files, Networking)
  • [131056110090] |The two most common architectures for UNIX kernels are: [131056110100] |
  • Monolithic kernel: Every kernel layer is integrated into the whole kernel and therefore runs in kernel space. [131056110110] |Every user application has to access the kernel through a high-level interface. [131056110120] |Most UNIX(-like) kernels follow this approach.
  • [131056110130] |
  • Microkernel: Only the essential parts of the kernel run in kernel space. [131056110140] |Applications are allowed to directly address different kernel layers (device drivers, filesystems, ..).
  • [131056110150] |

    Linux

    [131056110160] |The linux kernel is a UNIX-like kernel initially created by Linus Torvalds in 1991 and now is maintained by developers around the world. [131056110170] |

    Linux kernel compilation

    [131056110180] |
  • Ubuntu Wiki Kernel
  • [131056110190] |
  • What is the benefit of compiling your own linux kernel?
  • [131056110200] |

    Linux kernel internals

    [131056110210] |
  • LDD = Linux Device Drivers: a book (on paper or free online) on Linux kernel internals
  • [131056110220] |
  • LWN = Linux Weekly News: kernel evolutions explained
  • [131056110230] |
  • LKML = the linux-kernel mailing list, a high-volume, highly technical discussion list (archives)
  • [131056110240] |
  • LXR = the Linux cross-reference: a nice way to browse the kernel source
  • [131056110250] |
  • Understanding the linux kernel source
  • [131056120010] |Everything about UNIX kernels: development, configuration, compilation, design, etc [131056130010] |zle - I cannot find why Ctrl+R does not work for non-root [131056130020] |The Ctrl+R works for root (well toor) however I cannot find why it does not work for user. [131056130030] |User .zshrc: [131056130040] |diff with root .zshrc: [131056130050] |Any ideas? [131056130060] |zsh version 2.3.11. [131056140010] |If you have $EDITOR = vi* or VISUAL = vi* when zsh starts up, zsh uses vi insertion mode as the default keymap. [131056140020] |Otherwise zsh uses emacs mode. [131056140030] |You presumably set EDITOR (or VISUAL) to vim in your init file, but have no such setting when running as root, so you're seeing the vi mode map, in which history search is on ^X r and ^X s. [131056140040] |Add bindkey -e to your .zshrc (or learn the vi map). [131056140050] |As usual, this is in the documentation (zshzle man page), but you have to know what you're looking for. [131056150010] |How to (safely) move /tmp to a different volume? [131056150020] |Today the /tmp directory filled up on a machine at work. [131056150030] |The problem was, it was on the root partition which wasn't very big. [131056150040] |In order to fix this, a co-worker created a /new/tmp directory elsewhere, copied all the contents to the new directory, removed the original /tmp and made a symlink /tmp -> /new/tmp. [131056150050] |When he copied the files (really, this was somebody else, not me!) he didn't use -a so the owner of every file under /new/tmp was root. [131056150060] |Furthermore, he didn't set the permissions of the /new/tmp directory so it was the default 0755. [131056150070] |This caused no end of trouble and even tweaking mode and ownership bits failed to restore the machine to an acceptably working state. [131056150080] |I ended up having to nuke everything in /tmp and reboot. [131056150090] |The /tmp directory contained various sockets and pipes and whatnot, since a bunch of people run Gnome through VNC, and I use screen which has its own pipes. [131056150100] |Is there a safe way to move a /tmp directory to a different volume on a running system? [131056150110] |I'm not sure what I would have actually done to keep everything working. [131056150120] |I'm particularly curious about what happens to pipes and sockets. [131056160010] |On “client” machines, the safe way to move /tmp is to reboot. [131056160020] |Here, by client, I mean anything that runs programs that put sockets in /tmp, in particular X servers and screen. [131056160030] |The new /tmp definitely needs to have the right permissions (1777), otherwise you can't hope to have a working system. [131056160040] |For /tmp, you pretty much can't copy any files. [131056160050] |That's because most of the time, programs that put stuff in /tmp open the files. [131056160060] |If you copy the file, that copies the contents, but the programs still have the old files open. [131056160070] |You might be able to reach into them with a debugger (ptrace), but this will be a lot more complicated than rebooting, and with many programs all you'd do is crash them anyway. [131056160080] |If your /tmp is full and you want to switch to a new one live, you need to restart all programs that have files open there. [131056160090] |Since that means restarting X and screen sessions, it's not much better than rebooting. [131056160100] |You should be able to switch for new programs but keep existing open files in place by using a union mount. 
[131056160110] |(The principle is sound, but I've never tried it, so there may be issues I haven't anticipated.) [131056160120] |Here's a way to do this on Linux. [131056160130] |
  • Keep all existing files in /tmp except for a few manually-selected big ones.
  • [131056160140] |
  • Create a /tmp.new (mode 1777).
  • [131056160150] |
  • Expose /tmp on a different path: mount --bind / /.root.only. [131056160160] |This is necessary because the next step will shadow /tmp. [131056160170] |There may be different union mount implementations that don't require this step.
  • [131056160180] |
  • Make a union mount of /.root.only/tmp and /tmp.new, mounted on /tmp. [131056160190] |This way new files created in /tmp will be written in /tmp.new, but files in /.root.only/tmp are also visible under /tmp. [131056160200] |One possibility is unionfs-fuse: unionfs-fuse /tmp.new:/.root.only/tmp /tmp.
  • [131056160210] |If you don't want to go the union mount root (e.g. because it's not available on your platform, or because it's too much trouble), at least do not delete the old directory. [131056160220] |Move it, so that running programs will keep using the old directory and new programs will use the new one. [131056160230] |(Of course new programs won't be able to communicate with old programs through sockets or pipe in /tmp unless you set TMPDIR or otherwise tell them where to look.) [131056170010] |tmux vs. GNU Screen [131056170020] |Browsing through questions I found about tmux (I normally used GNU Screen). [131056170030] |My question is what are pros and cons of each of them. [131056170040] |Especially I couldn't find much about tmux. [131056180010] |tmux is fairly new compared with GNU screen. [131056180020] |Advantages / Disadvantages is a tough question, as both programs solve approximately the same problem. tmux is BSD licensed, however while screen is GNU GPL. [131056180030] |This matters to some people. [131056180040] |screen is more represented (on linux) at the moment, that is, you are more likely to find it on a given linux box than tmux. tmux is however more represented on OpenBSD as it is included as part of the base install, and his been for over a year now. [131056180050] |Both programs allow you to do about the same thing, though the how of things is a bit more complex than that. [131056180060] |Switching between the two is not overly complicated, as much of screens functionality has also found its way into tmux, though if you are a power user of either one, you will likely find some frustrations when switching to the other. [131056180070] |As with any program, it really depends on your needs, and which you are more comfortable with. [131056180080] |Give them both a try and see which you play nicely with. [131056180090] |For more info on tmux see: [131056180100] |http://tmux.sourceforge.net/ [131056190010] |The biggest difference in my use has been that in Gnu Screen you can only split frames horizontally, whereas in Tmux you can split both horizontally and vertically. [131056190020] |This is kind of a moving target, though as I here tell that vertical split is making it's way into screen. [131056190030] |Other then that, things are about flat. [131056200010] |I had troubles getting screen to support utf-8 and 256 colors but tmux worked out of the box. [131056210010] |From their website: [131056210020] |
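If you do try them side by side, the everyday commands map onto each other fairly directly; a quick cheat sheet (the session name work is just an example):

    screen -S work       # start a named screen session
    screen -ls           # list sessions
    screen -r work       # reattach later

    tmux new -s work     # start a named tmux session
    tmux ls              # list sessions
    tmux attach -t work  # reattach later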
  • How is tmux different from GNU screen? [131056210030] |What else does it offer?
  • [131056210040] |tmux offers several advantages over screen: [131056210050] |
  • a clearly-defined client-server model: windows are independent entities which may be attached simultaneously to multiple sessions and viewed from multiple clients (terminals), as well as moved freely between sessions within the same tmux server;
  • [131056210060] |
  • a consistent, well-documented command interface, with the same syntax whether used interactively, as a key binding, or from the shell;
  • [131056210070] |
  • easily scriptable from the shell;
  • [131056210080] |
  • multiple paste buffers;
  • [131056210090] |
  • choice of vi or emacs key layouts;
  • [131056210100] |
  • an option to limit the window size;
  • [131056210110] |
  • a more usable status line syntax, with the ability to display the first line of output of a specific command;
  • [131056210120] |
  • a cleaner, modern, easily extended, BSD-licensed codebase.
  • [131056210130] |There are still a few features screen includes that tmux omits: [131056210140] |
  • builtin serial and telnet support; this is bloat and is unlikely to be added to tmux;
  • [131056210150] |
  • wider platform support, for example IRIX and HP-UX, and for odd terminals.
  • [131056220010] |One difference is in how the two act when multiple terminals are attached to a single session. [131056220020] |With screen, each attached terminal's view is independent of the others. [131056220030] |With tmux, all attached terminals see the same thing. [131056220040] |Say you have two terminals attached to a single tmux session. [131056220050] |If you type ^B 1 into one terminal, the other terminal also switches to window 1. [131056220060] |When you have two terminals attached to a single screen session, and you type ^A 1 into one, this has no affect on the other terminal. [131056220070] |This is based on my experience with tmux 1.2; I see 1.3 is out but I didn't notice anything in the changelog about this behavior changing. [131056230010] |I will take the liberty of adding one difference: [131056230020] |tmux is ncurses based while screen does not draw additional elements. [131056230030] |If someone use terminal emulator that supports scrolling (s)he will get scrolling with screen but not with tmux (at least in default configuration). [131056230040] |The same thing applys for searching and similar features. [131056240010] |Drench raises an interesting point - the default behavior of connecting twice to the same session is different in tmux. [131056240020] |However, if you want to attach twice and have an independent view of the windows in that session - start tmux with [131056240030] |That will create a new session for you, and attach the windows from the already existing session. [131056240040] |If you didn't name your first session, you can add one with 'rename-session'. [131056250010] |Count how many times each line appears in a file [131056250020] |Say I have a file which contains: [131056250030] |I want to have the output like this: [131056260010] |I figured it out; one of uniq's options is -c, for "prefix lines by the number of occurrences": [131056270010] |I just came here with a similar problem. [131056270020] |From this, I managed to put together a slightly more advanced command, which I hope is useful for others. [131056270030] |As Steven D said in the comments above uniq only counts adjacent repeat lines, so you need to sort the lines first. [131056270040] |After that we find the unique lines then sort again so the most occurring lines are on top. [131056270050] |Output is redirected into the file output.txt. [131056270060] |If you just want to view results on the command line, remove the redirection and change the last command to sort -n so that the most common line will be at the bottom, i.e. definitely still on screen. [131056280010] |Bash Conditional Statements [131056280020] |What are the three formats of conditional statements used in bash scripting? [131056290010] |Well, I'm not really sure what the question is asking either, but I think that's not really relevant. [131056290020] |As I understand it, the StackExchange take on answering homework-ish questions is that the answers should be generally useful, i.e. should be able to serve as a reference for people who are not just trying to answer a very specific question. [131056290030] |The relevant section of the bash manual lists 5 different constructs which can be used for conditional evaluation. [131056290040] |Of these, the if, case, and [[ .. ]] constructs are probably the most commonly used in real-world code, though the (( .. )) construct will get used frequently in scripts that do complex counting or other numerical operations. 
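To make that concrete, the same kind of check written with each of those constructs might look like this (a minimal sketch; the file name and messages are only for illustration):

    f=/etc/passwd

    if [ -r "$f" ]; then echo "readable"; fi            # if with the classic test command

    if [[ $f == /etc/* ]]; then echo "under /etc"; fi   # the [[ .. ]] conditional expression

    case $f in
      *.conf) echo "a .conf file" ;;
      *)      echo "something else" ;;
    esac                                                # case pattern matching

    if (( ${#f} > 5 )); then echo "long name"; fi       # (( .. )) arithmetic evaluation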
[131056290050] |But they don't mention another very common form used for branching in bash, which is to just execute a command in combination with short-circuit evaluation, for example [131056290060] |&& and || are logical operators: "and" and "or" respectively. [131056290070] |This is effectively the same as [131056290080] |but works differently: in the first form, the conditional logic really comes about as a side effect of the actual explicit goal of the code, which is, at least ostensibly, to evaluate the logical operators. [131056290090] |This sort of construct is also commonly seen in C and JavaScript code. [131056290100] |It works by taking shortcuts: the parts of the expression, e.g. x && y || z, are evaluated left-to-right. [131056290110] |If the left side of a logical and, e.g. the x in x && y, evaluates to false, bash doesn't bother evaluating the y part because at that point it already knows that x && y is false. [131056290120] |The converse is true for logical ors: true || z always evaluates to true, no matter what z turns out to be. [131056290130] |So if the grep comes out false, bash can skip that first echo, because that part of the expression can't possibly be true. [131056290140] |So it moves on to the second echo. [131056290150] |On the other hand, if the grep is true, the first echo is executed (resulting in those words being, well, echoed..). [131056290160] |At this point bash is done, because whatever the value of the right-hand side of the ||, the result of the expression will still be true. [131056290170] |To get a feel for the way bash works with conditionals, it's best to do some experimentation on the command line. [131056290180] |The command [131056290190] |will echo the return code of the previous statement. [131056290200] |This is an error code, so 0 means true, and a non-zero value means false. [131056290210] |It's a bit confusing at first, but you get used to it. [131056290220] |So you can do, for example [131056290230] |and perform similar tests with the [[ .. ]] construct to get a feel for these basic building blocks. [131056290240] |Once you've got that done, move on to if and then case statements. [131056290250] |As for the answer to your question, well, I think that's very dependent on the context it was asked in. [131056290260] |But getting a good understanding of the various options should help you figure out what they are looking for there. [131056300010] |What are shmpages in layman's terms? [131056300020] |What exactly are shmpages in the grand scheme of kernel and memory terminology? [131056300030] |If I'm hitting a shmpages limit, what does that mean? [131056300040] |I'm also curious if this applies to more than Linux. [131056310010] |User mode processes can use Interprocess Communication (IPC) to communicate with each other; the fastest method of achieving this is by using shared memory pages (shmpages). [131056310020] |This happens for example if banshee plays music and vlc plays a video: both processes have to access pulseaudio to output some sound. 
[131056310030] |Try to find out more about shared memory configuration and usage with some of the following commands: [131056310040] |Display the shared memory configuration: [131056310050] |By default (Linux 2.6) this should output: [131056310060] |shmmni is the maximum number of allowed shared memory segments, shmmax is the allowed size of a shared memory segment (32 MB) and shmall is the maximum total size of all segments (displayed as pages, translates to 8 GB) [131056310070] |The currently used shared memory: [131056310080] |If enabled by the distribution: [131056310090] |ipcs is a great tool to find out more about IPC usage: [131056310100] |will output the shared memory usage, you can see the allocated segments with the corresponding sizes. [131056310110] |shows more information about a specified segment including the PID of the process creating (cpid) and the last (lpid) using it. [131056310120] |ipcrm can remove shared memory segments (but be aware that those are only get removed if no other processes are attached to them, see the nattach column in ipcs -m). [131056310130] |Running out of shared memory could be a program heavily using a lot of shared memory, a program which does not detach the allocated segments properly, modified sysctl values, ... [131056310140] |This is not Linux specific and also applies to (most) UNIX systems (shared memory first appeared in CB UNIX). [131056320010] |How to change what Alt+F2 calls in GNOME? [131056320020] |I saw something that made me salivate and I want it to be the program that is run when I do Alt-F2. [131056330010] |Open System -> Preferences -> Keyboard Shortcuts. [131056330020] |Disable (or reset) the Show the panel's "Run Application" dialog box. [131056330030] |Now Add a new shortcut and set Alt+F2 to the command you would like to start. [131056340010] |What does size of a directory mean in output of 'ls -l' command? [131056340020] |What does size of a directory mean in output of ls -l command? [131056350010] |This is the size of space on the disk that is used to store the meta information for the directory (i.e. the table of files that belong to this directory). [131056350020] |If it is i.e. 1024 this means that 1024 bytes on the disk are used (it always allocate full blocks) for this purpose. [131056360010] |Ubuntu: On a network with many clients there are two machines that can't access the web via a browser at the same time [131056360020] |Ok I'm pulling my hair out over this one. [131056360030] |We have a wireless network with many clients all working well except two Ubuntu clients running 10.10 that can't access the internet via a browser at the same time. [131056360040] |They can both still ping, use Skype etc but can't browse. [131056360050] |As soon as the one that can browse exits the network browsing returns for the other and vice versa. [131056360060] |As ping and Skype was working I assumed some kind of DNS problem but moving over to OpenDNS didn't solve it, nor did restarting networking or using wired rather than wireless. [131056360070] |We also switched out the router, and it still persisted so I'm sure this isn't a network issue. [131056360080] |The two clients are both laptops and work fine together on a wireless network at another office (which we don't control). [131056360090] |I'm thinking something must be cached from the other network they both use that's causing this but have no idea what. [131056360100] |Does anyone have any ideas? [131056360110] |I just don't know where to go from here. 
[131056370010] |Difficult to say, given the limited amount of information available. [131056370020] |Here's a couple of random suggestions. [131056370030] |
  • Check that proxy settings are correct (similar to working laptops).
  • [131056370040] |Check that Proxy Auto-Discovery works the same for your two browsers as it does for browsers on other laptops on the same network. [131056370050] |Auto-discovery happens via DNS in Firefox; Internet Explorer supports DNS, DHCP (via an INFORM request for option 252) and Group Policy distribution of proxy settings. [131056370060] |(Maybe IE supports one additional method, I can't recall at the moment.)
  • [131056370070] |Use a sniffer like tcpdump or Wireshark to figure out exactly what is going on; a minimal capture sketch follows this list. [131056370080] |If you're not sure how to interpret the raw packets, the additional information might be useful to add to this question.
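A minimal capture sketch for the sniffer suggestion above; the interface name and the port filter are assumptions, adjust them to your setup:

    # capture DNS and web traffic on the wireless interface into a file
    sudo tcpdump -i wlan0 -n -s 0 -w browsing.pcap 'port 53 or port 80 or port 443'
    # later, open browsing.pcap in Wireshark, or get a quick console summary:
    tcpdump -n -r browsing.pcap | head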
[131056380010] |My guess is that those two machines share something that should have been unique. [131056380020] |You should check their hostnames and IP addresses, then change them appropriately. [131056390010] |From the packet dump, it looks like Wireshark thinks that a number of frames are not in Ethernet v2 format. [131056390020] |A guess could be that one of the laptops is bonding two 54 Mbps channels into 108 Mbps using 802.3ad channel bonding (called Super-G mode when it's over WiFi in the original Atheros implementation; D-Link licensed the technology and probably called it something else) and that this is somehow failing. [131056390030] |Notably, a few packets are sent in normal framing format, namely DNS traffic, which also happens to work fine. [131056390040] |One idea could be to disable Super-G on either or both sides and see how that turns out. [131056390050] |Also, a raw dump rather than ASCII for machine2 would be nice ;-) [131056400010] |jetty repositories for Debian lenny 5.0? [131056400020] |This is my current sources.list, and I wish to install libjetty-java, libjetty-extra-java, and jetty, in that order. [131056400030] |However, the packages are not found, and I resorted to downloading the debs from http://dist.codehaus.org/jetty/deb/ and fetching the dependencies, viz. libslf4j-java and libservlet2.5-java, manually. [131056400040] |My question is, is there a Debian repository for jetty? [131056400050] |If not, will the above method be problematic in the long run? [131056400060] |I ask because I won't be eligible for automatic upgrades and the machine will be a production server. [131056400070] |Thanks. [131056410010] |Jetty is in the Debian repositories, but at the moment only in the testing distribution, not in the stable distribution which is what you have. [131056410020] |It looks like jetty doesn't have many dependencies that are not in lenny (stable), so a viable option is to keep your lenny system, but install a few binary packages from squeeze (testing). [131056410030] |This is viable only if the testing packages don't depend on having recent (post-stable) versions of libraries. [131056410040] |In particular, native executables are usually out since they require upgrading the C library. [131056410050] |Add squeeze repositories to your sources by putting these lines in a file /etc/apt/sources.list.d/squeeze.list: [131056410060] |Then you'll be able to install packages from squeeze. [131056410070] |But don't stop there, otherwise the next time you run apt-get upgrade, your system will become (almost) all-testing. [131056410080] |Create a file /etc/apt/preferences containing the following lines: [131056410090] |Then packages from testing have a priority of 200, which is less than the default (500). [131056410100] |So a package from testing will be installed only if there is no package with the same name in stable. [131056420010] |For deploying a public-facing webapp, which of Testing and Stable should I use some weeks before a release? [131056420020] |Which do you recommend? [131056420030] |Some POVs to consider: [131056420040] |
  • stability
  • [131056420050] |available packages
  • [131056420060] |life
  • [131056420070] |kernels
  • [131056420080] |or any other reasons?
[131056420090] |[note] This question was originally a request for recommendation between Debian 5 "lenny" and Debian 6 "squeeze". [131056420100] |I modified it to make it more generic. [131056430010] |Go for Debian Testing: [131056430020] |
  • life: Official support for Debian releases ends a year after a new one has been released. [131056430030] |So if you go for Debian Stable, you only have a year from the next release before needing to upgrade.
  • [131056430040] |stability: At the time of writing, the soon-to-be Debian 6 "squeeze" had ~20 RC bugs while the then-stable Debian 5 "lenny" had a whopping ~900 RC bugs (but don't read too much into it).
  • [131056430050] |packages: Each release of Debian has more packages than the last. [131056430060] |Note that sometimes some packages are removed from a release. [131056430070] |Reasons may include death of software, stability, security, ...
  • [131056430080] |kernels: More often than not, you want a newer kernel, if only for improved hardware support.
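For reference, here is roughly what the two files described in the earlier jetty answer might contain. This is a hedged sketch: the mirror URL is an assumption, and the Pin-Priority value of 200 comes from that answer.

    # /etc/apt/sources.list.d/squeeze.list
    deb http://ftp.debian.org/debian squeeze main
    deb-src http://ftp.debian.org/debian squeeze main

and, in /etc/apt/preferences:

    Package: *
    Pin: release a=testing
    Pin-Priority: 200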
[131056440010] |Lock second mouse pointer & keyboard input to one window [131056440020] |This is basically a lite version of multiseat (see my other question): I can set up two mouse pointers (see e.g. here, here or here) and two keyboards (although my two identical Logitech K200 keyboards show up as four keyboards in xinput list, any ideas why?). [131056440030] |Only this is not enough to allow two users to do things independently, since apparently only one window can be focused at a time, so all keyboard input goes to that window. [131056440040] |Can this behaviour be changed? [131056440050] |Can the inputs of two keyboards be directed to two different windows at the same time? [131056440060] |And can a mouse-locking application be told to only lock one of the pointers? [131056440070] |(In case your answer includes Xephyr, can that support direct OpenGL rendering?) [131056450010] |USB keyboards often expose multiple interfaces; it is likely that one is the standard 104-key set and another is the extra buttons. [131056450020] |MPWM is a simple window manager that is able to use the MPX extensions to allow for multiple independent client pointers and active windows. [131056450030] |You'll have to wait for similar support to appear in other window managers. [131056450040] |Xephyr currently supports neither direct nor indirect OpenGL rendering. [131056450050] |It's in development. [131056460010] |rsync: does it use invisible configuration files? [131056460020] |I have a question about the rsync command in unix. [131056460030] |Does it save some invisible configuration files into the directories? [131056460040] |How does it know which files need to be added or not? [131056460050] |Or is it just checking file by file? [131056460060] |I'm asking this because I'm planning to copy all files (but not the root directory) to another folder (svn working copy). [131056460070] |thanks [131056470010] |rsync does not use any invisible files to remember what it has/has not copied across. [131056470020] |Each time you run an rsync command it checks source against target (according to the options you specify on your rsync command line) and copies files accordingly. [131056470030] |If you want to explicitly ignore/not copy files/folders you can use the --exclude (and related) option to do that. [131056470040] |Check the rsync manpage for full details of what is available. [131056480010] |In addition to @Richm's answer: rsync uses timestamps and size (like git does) by default to tell if a file has changed and whether rsync should update it. You can use the --checksum option to check based on a checksum instead of a timestamp. [131056490010] |Can a bash script be hooked to a file? [131056490020] |I want a script to sleep unless a certain file is modified/deleted (or a file created in a certain directory, or ...). [131056490030] |Can this be achieved in some elegant way? [131056490040] |The simplest thing that comes to my mind is a loop that sleeps for some time before checking the status again, but maybe there is a more elegant way? [131056500010] |On Linux, you can use the kernel's inotify feature. [131056500020] |Tools for scripting can be found there: inotify-tools. [131056500030] |Example use from wiki: [131056510010] |What is the difference between the following kernel Makefile terms: vmlinux, vmlinuz, vmlinux.bin, zimage & bzimage? [131056510020] |While browsing through the Kernel Makefiles, I found these terms.
[131056510030] |So I would like to know: what is the difference between vmlinux, vmlinuz, vmlinux.bin, zImage & bzImage? [131056520010] |It's all in here: http://en.wikipedia.org/wiki/Vmlinux [131056530010] |vmlinux [131056530020] |This is the Linux kernel in a statically linked executable file format. [131056530030] |Generally, you don't have to worry about this file, it's just an intermediate step in the boot procedure. [131056530040] |The raw vmlinux file may be useful for debugging purposes. [131056530050] |vmlinux.bin [131056530060] |The same as vmlinux, but in a binary file format. [131056530070] |vmlinuz [131056530080] |The vmlinux file usually gets compressed with zlib. [131056530090] |Since 2.6.30 LZMA and bzip2 are also available. [131056530100] |By adding further boot and decompression functionalities to vmlinuz, the image can be used to boot a system with the vmlinux kernel. [131056530110] |The compression of vmlinux can occur with zImage or bzImage. [131056530120] |The function decompress_kernel() handles the decompression of vmlinuz at bootup, a message indicates this: [131056530130] |zImage (make zImage) [131056530140] |This is the old format for small kernels (compressed, below 512KB). [131056530150] |At boot, this image gets loaded low in memory (the first 640KB of the RAM). [131056530160] |bzImage (make bzImage) [131056530170] |The big zImage (this has nothing to do with bzip2) was created as the kernel grew and handles bigger images (compressed, over 512KB). [131056530180] |The image gets loaded high in memory (over 1MB of the RAM). [131056530190] |As today's kernels are way over 512KB, this is mostly the preferred way. [131056530200] |An inspection on Ubuntu 10.10 shows: [131056540010] |Shell output help [131056540020] |What is the output of date -u +%W$(uname)|sha256sum|sed 's/\W//g' (on Arch Linux if it matters)? [131056540030] |How do I find that out? [131056550010] |Displays the current week of the year. [131056550020] |Displays the kernel name. [131056550030] |Generates a SHA-256 hash sum. [131056550040] |Cuts out all non-word characters. [131056550050] |The |'s are pipes, redirecting the output of each command to the next one. [131056550060] |Enter the line in a terminal, e.g. gnome-terminal or xterm: [131056550070] |Depending on the date and the operating system installed, this will output different hashes, like this: [131056560010] |Arch Linux installation - partitions help [131056560020] |While trying to set up partitions from the Arch Linux installation CD itself (using cfdisk), the following error showed up: [131056560030] |Bad primary partition 2: partition ends in final partial cylinder [131056560040] |How do I fix it? [131056570010] |
  • Fetch sysresccd (skip this step if you don't need any kind of GUI and parted is on the Arch disk). [131056570020] |It is useful to have a copy anyway.
  • [131056570030] |Use gparted/parted to partition the hard drive (the latter is present IIRC); a rough parted sketch follows this list.
  • [131056570040] |Resume installing
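A rough parted sketch for the partitioning step above. The device name and sizes are assumptions; double-check them, since this rewrites the partition table:

    # destructive: writes a new partition table on /dev/sda
    parted /dev/sda mklabel msdos
    # the filesystem-type argument is only a hint; you still run mkfs afterwards
    parted /dev/sda mkpart primary ext2 1MiB 20GiB
    parted /dev/sda mkpart primary linux-swap 20GiB 22GiB
    parted /dev/sda print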
[131056580010] |Don't confuse the terminal, which is the environment for text mode programs, with the shell, which executes commands. [131056580020] |

    Background

    [131056580030] |In a unix context, a terminal is an environment for text input and output. [131056580040] |Historically, a terminal was a physical device, but these days most terminals are provided by terminal emulators. [131056580050] |If your display is in text mode, this is usually known as a “text console”, or sometimes (somewhat confusingly) as a “virtual terminal”. [131056580060] |A “terminal” can be graphical, but in a unix context there will normally be a qualifier, e.g. “X terminal”. [131056580070] |Most interactive programs that run inside terminals are of one of two kinds: [131056580080] |
  • read-eval-print loops are programs that read a line, then execute it. [131056580090] |Unix shells are examples of this.
  • [131056580100] |full-screen text mode programs, such as emacs -nw, lynx, nethack, vi, …
  • [131056580110] |

    Escape sequences

    [131056580120] |A terminal and the program inside it communicate by exchanging text with embedded escape sequences. [131056580130] |When you type a character in a terminal, the program receives that character; if you type a function key, the terminal usually converts it to an escape sequence. [131056580140] |In the other direction, if the program outputs a printable character, the terminal displays it; if the program outputs a control character, it performs a function such as moving the cursor, changing the color, etc. [131056580150] |Most terminals are compatible with Xterm control sequences. [131056580160] |

    Further reading

  • [131056580170] |What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?
  • [131056580180] |What protocol/standard is used by terminals?
  • [131056580190] |Is screen useful?
  • [131056580200] |How can I close a terminal without killing the command running in it?
  • [131056590010] |A terminal is an environment for text input/output [131056600010] |How can I make a process I start during an SSH session run after the session has ended? [131056600020] |Possible Duplicate: Keep SSH Sessions running after disconnection. [131056600030] |I have a process which is basically a web-server, I start it during an SSH session. [131056600040] |However, when I leave the session (by closing the PuTTY windows), it stops running and responding to requests. [131056600050] |This is true even if I end the command with a &. [131056600060] |With Apache, I don't have this problem, it comes with a stop, a start, and a restart script. [131056600070] |I'd like to create something like that for this program. [131056600080] |How can I start a process, so that it will continue running even after I end the SSH session I started it in? [131056600090] |Also how can I set it to restart itself if it stops for some reason? [131056600100] |Thanks! [131056610010] |You can make it a daemon (fork it twice or have it started by the system's init daemon) or for temporary stuff use screen. [131056620010] |Start it in a screen session. [131056620020] |Now start the process: [131056620030] |Then, detach the screen session with Ctrl+a d. [131056620040] |You can reattach to the screen session again by typing: [131056620050] |If you have more sessions running you can list them with: [131056630010] |Easy incremental backups to an external hard drive. [131056630020] |For a while I used Dirvish to do incremental backups of my machines, but it is slightly cumbersome to configure, and if you do not carry a copy of your configuration it can be hard to reproduce elsewhere. [131056630030] |I am looking for backup programs for Unix, Linux that could: [131056630040] |
  • Incrementally update my backup
  • [131056630050] |Create "mirror" trees like dirvish did using hardlinks (to save space)
  • [131056630060] |Ideally with a decent UI
[131056640010] |Try rsnapshot; it uses rsync and hardlinks and is incremental. [131056650010] |I've been using epitome for about a year now for deduplicated backups of my personal data. [131056650020] |It has a tar-like interface so it's quite comfortable for a unix user, and setup is a breeze, at least on OpenBSD. [131056650030] |You can easily cron it to back up your directories on a daily basis, and it takes care of the deduplication of your data. [131056650040] |You basically are left with a meta-file that you can use to restore your snapshot at a later date. [131056650050] |As I said, the interface is tar-like, so doing a backup is as easy as: [131056660010] |This crude -but functional- script will back up everything under the sun to your external hard drive under a hard link farm. [131056660020] |The directory name is a timestamp, and it maintains a symlink to the latest successful backup. [131056660030] |Think of it as a Time Machine sans the fancy GUI. [131056660040] |Set it up by creating an empty $TARGET and symlinking a dummy $TARGET/latest to it. [131056660050] |Populate /etc/backup/rsync.exclude with lost+found, tmp, var/run and everything else you need to skip during backup, or go for --include-from if it fits you better; "man rsync" is your friend. [131056660060] |Proper sanity checks, error control, remote backup and a pretty GNOME GUI are left as an exercise to the reader ;-) [131056670010] |I've had some success with RIBS (Rsync Incremental Backup System). [131056670020] |It uses rsync, so hardlinks are supported, and it can do incremental backups hourly, daily, weekly and monthly. [131056670030] |However, it is a script only. [131056670040] |To set it up you need to edit the settings and then set up related cronjobs. [131056670050] |It works, but it's not the most user friendly. [131056680010] |BackupPC sounds like it fits the bill. [131056680020] |It manages a tree of hard links for dedupe and can back up many machines, or just the local machine. [131056690010] |Rdiff Backup is really good: http://rdiff-backup.nongnu.org/ [131056700010] |I use backintime, which is primarily targeted towards Gnome/KDE desktops. [131056700020] |However, it can work from the command line as well. [131056700030] |I describe backintime as a backup system with "poor man's deduplication". [131056700040] |If you were to write your own backup script to use rsync and hardlinks, you would end up with something similar to backintime. [131056700050] |
  • I use cron to kick off the backintime job once per night.
  • [131056700060] |As the documentation says: The real magic is done by rsync (take snapshots and restore), diff (check if something changed) and cp (make hardlinks).
  • [131056700070] |backintime can be configured with different schedules. [131056700080] |I keep monthly backups for 1 year, weeklies for 1 month, and dailies for 1 week.
  • [131056700090] |backintime uses hardlinks. [131056700100] |I have 130GB worth of data, and I back this up nightly. [131056700110] |It only uses 160GB worth of space on the second drive because of the magic of hardlinks.
  • [131056700120] |Restoring data from the backup location is as simple as running cp /u1/backintime/20100818-000002/backup/etc/rsyslog.conf /etc/rsyslog.conf. [131056700130] |You don't need to use the GUI.
  • [131056700140] |On the second drive, the initial copy was expensive (since you can't do hardlinks between two different filesystems), but subsequent copies are fast.
  • [131056700150] |I copy data from my primary filesystems to a second filesystem on a second hot-swappable drive, and periodically rotate the secondary drive.
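If you would rather roll your own rsync-plus-hardlinks snapshots in the spirit of the crude script mentioned earlier, a minimal sketch might look like the following; the target path and the latest symlink convention are assumptions, and the exclude file is the one named above:

    #!/bin/sh
    # one timestamped snapshot per run; unchanged files are hardlinked to the previous one
    TARGET=/mnt/backup
    STAMP=$(date +%Y%m%d-%H%M%S)
    # make sure $TARGET itself is listed in the exclude file
    rsync -a --delete \
          --exclude-from=/etc/backup/rsync.exclude \
          --link-dest="$TARGET/latest" \
          / "$TARGET/$STAMP" \
      && ln -snf "$STAMP" "$TARGET/latest"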
[131056710010] |The comparison of backup tools at the Ubuntu Stack Exchange is not really Ubuntu-specific. [131056710020] |Perhaps you'll get some suggestions there. [131056710030] |I recommend DAR - the Disk ARchive program. [131056710040] |It does not come with a GUI, but its config is easy to reproduce. [131056710050] |It has great incremental backup support. [131056710060] |It does not use hardlink mirror trees, but it has a convenient shell for navigating the filesystem view of different snapshots. [131056720010] |A process is an instance of a running computer program. [131056720020] |

    Obtaining information about processes

    [131056720030] |

    Some useful tools

  • [131056720040] |top, htop: text-mode system monitors, showing process information in real time
  • [131056720050] |lsof: list process open files. [131056720060] |Also netstat specifically for network connections.
  • [131056720070] |ptrace(): a programming interface to see all the system calls that a process is making. [131056720080] |Different systems have different command line tools: strace on Linux, ktrace on *BSD, truss on Solaris, dtrace on FreeBSD and Mac OS X, … (a short usage sketch follows this list)
  • [131056720090] |
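A short usage sketch for some of the tools above; the PID 1234 and the process name are placeholders:

    lsof -p 1234                    # files, sockets and libraries opened by process 1234
    strace -p 1234 -e trace=file    # watch the file-related system calls it makes (Linux)
    netstat -tupn | grep 1234       # network connections belonging to that PID (Linux, as root)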

    Further reading

  • [131056720100] |What is using this network socket?
  • [131056720110] |How to find which processes are taking all the memory?
  • [131056720120] |Monitor one process
  • [131056720130] |

    Keeping processes running

    [131056720140] |

    Some useful tools

  • [131056720150] |screen: run programs in a detachable terminal that you can reattach to later from a different place (a minimal sketch follows this list)
  • [131056720160] |cron: schedule a task at regular intervals. [131056720170] |Also at for a one-off.
  • [131056720180] |
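A minimal sketch of both; the session name, paths and schedule are assumptions:

    # start a command in a detached screen session, reattach later, detach again with Ctrl+a d
    screen -dmS mydaemon /home/me/bin/server
    screen -r mydaemon

    # crontab entries: run a check every 5 minutes, and restart the server at boot
    # (@reboot is a common cron extension, not available in every implementation)
    */5 * * * * /home/me/bin/check-server
    @reboot     /home/me/bin/server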

    Further reading

  • [131056720190] |Keep SSH Sessions running after disconnection.
  • [131056720200] |How can I pause up a running process over ssh, disown it, associate it to a new screen shell and unpause it?
  • [131056720210] |Running continuous jobs remotely.
  • [131056720220] |

    Other topics

    [131056720230] |

    Further reading

  • [131056720240] |Is there a way to limit the amount of memory a particular process can use in Unix?
  • [131056720250] |Is there a way to intercept interprocess communication in Unix/Linux?
[131056730010] |A process is an instance of a computer program that is being executed. [131056740010] |If you're just looking for a program to accomplish a common task, you can probably find it yourself. [131056740020] |See Where can I find Software for Unix/Linux that does X? [131056740030] |If you have more particular requirements, use this tag, plus one or more other tags indicative of the general topic the software is about. [131056740040] |If you're looking for a unix equivalent to a Windows program, don't assume your readers will have heard of the Windows program. [131056740050] |Avoid subjective requirements like “best”. [131056750010] |recommendations for software for a particular purpose [131056760010] |Non-Root Package Managers [131056760020] |From my research, I seem to notice that all package managers insist on being used as a privileged user and must be installed into /. [131056760030] |Typically, what I like to do is create a throwaway account, compile some software, and install to $HOME for that account. [131056760040] |I can try a variety of setups and then when I'm done, just destroy the account. [131056760050] |However, compiling software becomes tedious. [131056760060] |My experience is really just limited to yum, but I don't understand why I wouldn't be able to drop a repo file into ~/etc/yum.repos.d and have yum install everything into a home account. [131056760070] |Is there any reason why package managers must be used as a privileged user to install software? [131056770010] |First of all it is due to dependencies. [131056770020] |Some packages may not be installed by a user - like PolicyKit. [131056770030] |Therefore it would put an additional burden on packagers, who donate their free time, and usually installing a program is as easy as typing sudo (on a single-user station) or nagging the administrator. [131056770040] |There are options for installing in $HOME: [131056770050] |
  • Language-specific 'package managers' usually support it out of the box (like gem for Ruby or cabal for Haskell) or with small tweaking (I forget the name of the Python one)
  • [131056770060] |Good old ./configure --prefix=$HOME/sandbox --enable-cool-feature && make all install (or variations like jhbuild); a fuller sketch follows this list.
  • [131056770070] |There was a program to install to $HOME a few years ago. [131056770080] |However I cannot find it - I guess nearly no one used it, as they either installed things themselves or nagged administrators.
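A slightly fuller sketch of the configure-into-$HOME option above; the prefix and the extra environment variables are assumptions:

    ./configure --prefix="$HOME/sandbox"
    make
    make install
    # make the result usable from your shell
    export PATH="$HOME/sandbox/bin:$PATH"
    export LD_LIBRARY_PATH="$HOME/sandbox/lib:$LD_LIBRARY_PATH"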
  • [131056780010] |Binary packages are compiled with the assumption that they will be installed to specific locations in /. [131056780020] |This is not always easily changed, and it would take additional QA effort (which is difficult enough in the first place!) to determine whether specific binaries are or aren't relocatable. [131056780030] |To an extent, you can use things like fakechroot to create an entire system in a subdirectory as a non-root user, but this is tedious and fragile. [131056780040] |You will have better luck with source packages. [131056780050] |Gentoo Prefix and Rootless GoboLinux are both package managers that can install to non-/ locations and may be usable by non-root users. [131056790010] |My experience is really just limited to yum, but I don't understand why I wouldn't be able to drop a repo file into ~/etc/yum.repos.d and have yum install everything into a home account. [131056790020] |The mainstream Linux package managers view the world as a sysadmin would ... where the machine is a single entity. [131056790030] |This allows you to get answers to questions like "what outstanding errata apply to system X" and "how do system X and system Y differ". [131056790040] |This also allows yum to have "a history" which is usable, have rpmdb versions and do things like "yum --security update" etc. [131056790050] |There are some package managers, like zero-install, which try to view the world as a user would ... ie. what applications do I have access to. [131056790060] |You might think the later is a better model, but IMNSHO there is a reason you haven't heard of zero-install but have heard of yum. [131056800010] |There's a package manager project--Nix--with an interesting foundational idea (a "functional" pkg manager), which also supports a per-user operation: [131056800020] |Multi-user support [131056800030] |Starting at version 0.11, Nix has multi-user support. [131056800040] |This means that non-privileged users can securely install software. [131056800050] |Each user can have a different profile, a set of packages in the Nix store that appear in the user’s PATH. [131056800060] |If a user installs a package that another user has already installed previously, the package won’t be built or downloaded a second time. [131056800070] |At the same time, it is not possible for one user to inject a Trojan horse into a package that might be used by another user. [131056800080] |A NOTE I WANT TO ADD: Nix ought to be usable in a Unix-like system of your choice (e.g., a Linux distro). [131056800090] |There are also an associated large collection of packages that can be installed with the Nix package manager--Nixpkgs--built for a number of platforms: [131056800100] |
  • GNU/Linux on 32-bit and 64-bit x86 (i686-linux and x86_64-linux)
  • [131056800110] |Mac OS X (i686-darwin and x86_64-darwin)
  • [131056800120] |FreeBSD (i686-freebsd and x86_64-freebsd)
  • [131056800130] |OpenBSD (i686-openbsd)
  • [131056800140] |Windows / Cygwin (i686-cygwin),
and an associated distro--NixOS: [131056800160] |NixOS is a Linux distribution based on Nix. [131056800170] |It uses Nix not just for package management but also to manage the system configuration (e.g., to build configuration files in /etc). [131056800180] |This means, among other things, that it's possible to easily roll back the entire configuration of the system to an earlier state. [131056800190] |Also, users can install software without root privileges. [131056800200] |Read more… [131056800210] |and an associated "continuous" build system--Hydra. [131056810010] |How to monitor CPU/memory usage of a single process? [131056810020] |I would like to monitor one process's memory/CPU usage in real time. [131056810030] |Similar to top but targeted at only one process, preferably with a history graph of some sort. [131056820010] |top actually supports focusing on a single process, although it naturally doesn't have a history graph: [131056830010] |I normally use the following two:
  • HP caliper: it's a very good tool for monitoring processes; you can check the call graph and other low-level information as well. [131056830030] |But please note it's free only for personal use.
  • [131056830040] |daemontools: a collection of tools for managing UNIX services
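Going back to the earlier answer about focusing top on a single process: on Linux that is typically something along these lines, where the PID and the process name are placeholders:

    top -p 1234                            # watch a single PID
    top -p "$(pidof firefox | tr ' ' ,)"   # watch every PID of a named program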
  • [131056840010] |htop is a great replacement to top. [131056840020] |It has... [131056840030] |Colors! [131056840040] |Simple keyboard shortcuts! [131056840050] |Scroll the list using the arrow keys! [131056840060] |Kill a process without leaving and without taking note of the PID! [131056840070] |Mark multiple processes and kill them all! [131056840080] |Among all of the features, the manpage says you can press F to follow a process. [131056840090] |Really, you should try htop. [131056840100] |I never started top again, after the first time I used htop. [131056850010] |Release a port owned by third party application? [131056850020] |I have been consulted about any way to release a port owned by a third party application that is listening to it. [131056850030] |Say the application uses sockets and is listening for an specific port the root now want to claim back without closing the application. [131056850040] |I've been unable to give an answer. [131056850050] |Is there any way to close the handle to the port from the terminal or with an specific API (say the root can write and run C++) that can accomplish this? [131056860010] |There's no clean way to close an open file (network port or otherwise) in an application that doesn't expect it. [131056860020] |There is a way to close the file under its nose, but the application might not react well. [131056860030] |There's a good chance it will crash, which would defeat the purpose. [131056860040] |You can execute a system call in a remote process with the ptrace system call. [131056860050] |Use lsof or netstat to find the file descriptor you're interested in. [131056860060] |Then attach your favorite debugger to the process in question and make it execute a close (or shutdown) system call. [131056860070] |As this has a good chance of crashing the application, because its interface with its environment will no longer match its internal data structure, consider other approaches. [131056860080] |In particular, if the purpose is to have a different application listening on a UDP or TCP port, you could redirect traffic to a different port at the level of the network layer (iptables under Linux, pfctl under BSD, …). [131056870010] |Connect to byobu screen session and execute command? [131056870020] |In a script I am building I'm experimenting with how to automate as much as possible. [131056870030] |One of the more interesting challenges is to connect to a byobu screen session and execute a command. [131056870040] |So I started where I thought things would be: how many screen sessions there are (game has 4 windows in byobu and lordquackstar has 2 windows) [131056870050] |So I started in the obvious place, looking on how many screen sessions there are (game has 3 windows in byobu and lordquackstar has 2. [131056870060] |The users are in separate putty instances) [131056870070] |Only one there, so I checked for the system [131056870080] |Still no multiple screens [131056870090] |So for my question: How can I connect to a window in byobu from a script? [131056870100] |On a slightly related note, once I connect to it from a bash script, is there any way to send it a command then detatch? [131056880010] |You can directly attach to a previously detached byobu/screen session including the window: [131056880020] |will reattach into window 2 (or a named one). [131056880030] |-X can send any command to a byobu/screen session and also works with the -p switch. 
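A hedged sketch of what such an invocation can look like; the window number is a placeholder, and stuff simply types the given string into that window:

    screen -S byobu -p 2 -X stuff "uname -a$(echo -ne '\r')"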
[131056880040] |This will send a uname -a to the second (third actually) byobu window; the echo at the end sends a carriage return so the command gets executed. [131056890010] |You can send a command to a particular screen window of a particular screen session without attaching to it. [131056890020] |The session name is set with the -S option when starting screen or the sessionname command; by default it's byobu with byobu. [131056890030] |You can also use the screen PID after -S. [131056890040] |You can set a window's name with the title command. [131056890050] |You can also use the window number. [131056900010] |Permissions issues with mounting remote server into a specific folder [131056900020] |I'm doing the following to mount a remote server to a specific path on my server: [131056900030] |However when I mount the server the folder permissions change (they become 700), and when I test my rsnapshot.conf file I get the following error: [131056900040] |What am I doing wrong? Should I mount the remote server with another user? [131056910010] |FUSE has options to control who has access to the files. [131056910020] |I'm guessing you want sshfs -o allow_other. [131056920010] |How can I search for a file with fixed name length using ls? [131056920020] |In a directory, I have files like [131056920030] |etc [131056920040] |How can I get a listing of only the first four files? [131056930010] |There are multiple methods: [131056930020] |

    ls only

    [131056930030] |
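A hedged sketch of the pure-ls (really, shell glob) approach, assuming names of the form lazer_ followed by exactly six digits, as described in the egrep answer below:

    ls lazer_[0-9][0-9][0-9][0-9][0-9][0-9]
    # or, fixed width but without checking that the characters are digits:
    ls lazer_??????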

    ls and egrep

    [131056930040] |
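A hedged sketch of the ls-plus-egrep approach, again assuming a lazer_ prefix and exactly six digits:

    ls | egrep '^lazer_[0-9]{6}$'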

    find
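And a hedged find equivalent, restricted to the current directory (-maxdepth is a GNU find extension; the ? wildcard matches any character, not just digits):

    find . -maxdepth 1 -type f -name 'lazer_??????' | sort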

    [131056940010] |As pointed out by SiegeX, the shell alone does not understand regular expressions. [131056940020] |If you want a precise filter of your files, you must use regular expressions and hence use a command like egrep. [131056940030] |Here, the files you want to list begin with lazer_ and are followed only by some digits (possibly more or fewer than 6). [131056940040] |I would do it this way: [131056940050] |This regex works the same as '^lazer_[0-9]*$'. [131056940060] |Regular expressions with egrep also handle repetition, just like in the answer of wag, if you want to restrict your list to files ending with exactly 6 digits: [131056950010] |How to find out how much memory a program is taking [131056950020] |I use htop to watch my processes, but most of the services run are multi-threaded, so they are shown on several lines with a % of memory use which in some cases is the same for all of them and sometimes varies among them. [131056950030] |Say in the case of Firefox it can list 15 processes each consuming 13% of memory. [131056950040] |Can I know how much memory Firefox, MySQL, some fcgi script or anything else is consuming, since the htop output seems misleading? [131056960010] |Most top implementations have a way to turn the display of threads on or off. [131056960020] |
  • htop: in the “Setup / Display options” menu, “Hide userland threads”.
  • [131056960030] |Linux top: press H to toggle the showing of threads (but they're off by default).
  • [131056960040] |OpenBSD top: press T to toggle the showing of threads (but they're off by default).
  • [131056960050] |Note that memory mappings, and hence memory occupation, is a property of a process, so you'll always see the same numbers for every thread in a process. [131056960060] |If you see different numbers, it means there are multiple processes. [131056960070] |There's no easy way to find out the total memory consumption of a set of processes because the concept isn't well-defined. [131056960080] |Some of the memory may be shared; this happens all the time with shared libraries, and in addition related processes (such as multiple instances of a server) are more likely to use shared memory to exchange data. [131056960090] |If you just add the figures, you'll often get a number that's a lot larger than the actual used memory. [131056970010] |You could use this: http://www.pixelbeat.org/scripts/ps_mem.py [131056980010] |ls is a command to list files and their metadata (time, size, owner, etc.). [131056980020] |Some implementations can show files in color. [131056980030] |ls does not expand wildcards: this is done by the shell. [131056980040] |It is very nearly always a mistake to use ls in a script. [131056980050] |If you're just expanding wildcards, the shell does it already. [131056980060] |If you're looking for files based on criteria such as their size or time, use find (or zsh's glob qualifiers if you have zsh). [131056980070] |

    Further reading

  • [131056980080] |Is there any option with 'ls' command that I see only the directories?
  • [131056980090] |what does the @ mean in ls -l?
  • [131056980100] |ls command: what does the first line mean?
  • [131056990010] |ls - list directory content [131057000010] |What's the progress regarding improving system performance/responsiveness during high disk I/O? [131057000020] |Whenever there is high disk I/O, the system tends to be much slower and less responsive than usual. [131057000030] |What's the progress on Linux kernel regarding this? [131057000040] |Is this problem actively being worked on? [131057010010] |I think for the most part it has been solved. [131057010020] |My performance under heavy IO has improved in 2.6.36 and I expect it to improve more in 2.6.37. [131057010030] |See these phoronix Articles. [131057010040] |Wu Fengguang and KOSAKI Motohiro have published patches this week that they believe will address some of these responsiveness issues, for which they call the "system goes unresponsive under memory pressure and lots of dirty / writeback pages" bug. [131057010050] |Andreas Mohr, one of the users that has reported this problem to the LKML and tested the two patches that are applied against the kernel's vmscan reported success. [131057010060] |Andreas' problem was the system becoming fully unresponsive (and switching to a VT took 20+ seconds) when making an EXT4 file-system when a solid-state drive was connected via USB 1.1. [131057010070] |On his system when writing 300M from the /dev/zero file the problem was even worse. [131057010080] |Here's a direct link to the bug [131057010090] |Also from Phoronix [131057010100] |Fortunately, from our testing and the reports of other Linux users looking to see this problem corrected, the relatively small vmscan patches that were published do seem to better address the issue. [131057010110] |The user-interface (GNOME in our case) still isn't 100% fluid if the system is sustaining an overwhelming amount of disk activity, but it's certainly much better than before and what's even found right now with the Linux 2.6.35 kernel. [131057010120] |There's also the Phoronix 2.6.36 release announcement [131057010130] |It seems block barriers are going away and that should also help performance. [131057010140] |In practice, barriers have an unpleasant reputation for killing block I/O performance, to the point that administrators are often tempted to turn them off and take their risks. [131057010150] |While the tagged queue operations provided by contemporary hardware should implement barriers reasonably well, attempts to make use of those features have generally run into difficulties. [131057010160] |So, in the real world, barriers are implemented by simply draining the I/O request queue prior to issuing the barrier operation, with some flush operations thrown in to get the hardware to actually commit the data to persistent media. [131057010170] |Queue-drain operations will stall the device and kill the parallelism needed for full performance; it's not surprising that the use of barriers can be painful. [131057010180] |There's also this LWN article on fair I/O Scheduling [131057010190] |I would say IO reawakened as a big deal about the time of the release of ext4 in 2.6.28. [131057010200] |The following links are to Linux Kernel Newbies Kernel releases, you should review the Block, and Filesystems sections. 
[131057010210] |This may of course be an unfair sentiment, or just a product of when I started watching FS development (I'm sure it's been improving all along), but I feel that some of the ext4 issues caused people to look hard at the IO stack, or it might be that they were expecting ext4 to resolve all the performance issues, and when it didn't they realized they had to look elsewhere for the problems. [131057010220] |2.6.28, 2.6.29, 2.6.30, 2.6.31, 2.6.32, 2.6.33, 2.6.34, 2.6.35, 2.6.36, 2.6.37