[131070980010] |EGLIBC vs GLIBC on laptops [131070980020] |From what I understand, the primary cause of the EGLIBC fork/branch from GLIBC was a problem with patches for ARM that Debian devs wanted to push upstream. [131070980030] |I have a slightly different question - is there any significant difference for relatively fat (as compared to embedded platforms) x86-64 machines? [131070990010] |The main goals of EGLIBC (described at http://www.eglibc.org/mission) are all targeting embedded platforms:
  • [131070990020] |Provide options to reduce on-disk and in-memory footprint: As long as you want to keep binary compatibility with glibc, these options are useless or have nearly no effect on x86-64.
  • [131070990030] |Support cross-compilation and cross-validation: Well... [131070990040] |Who does cross-compiling for x86-64...
  • [131070990050] |Support processors used in embedded systems: I guess you don't run "x86-64" embedded systems, so it's useless, too.
  • [131070990060] |Incorporate support for processor-specific functionality where appropriate: glibc already has full optimizations for x86.
  • [131070990070] |Retain API and ABI compatibility with GLIBC wherever feasible.
[131070990080] |I wouldn't suggest using eglibc unless your distribution fully supports it; otherwise you may sometimes break linking when full ABI compatibility is not possible. [131070990090] |That said, many developers criticize the development style of glibc, so non-technical reasons may favor eglibc over glibc. [131070990100] |API and ABI compatibility between eglibc and glibc disallows big changes, and all x86-64 optimizations are probably merged into glibc, too. [131071000010] |How to install fonts for X? [131071000020] |I would like to set up a Linux system with ScrotWM as the window manager, but I noticed that X is aware of only a few fonts. [131071000030] |I would like to have UTF-8 fonts that support multiple languages, including Asian languages like Japanese and Traditional Chinese. [131071000040] |How do I install fonts so that X can show them? [131071000050] |What kind of fonts would they be? [131071000060] |And will the fonts be universally available to other programs like Firefox or OpenOffice once I install them? [131071000070] |Or will applications maintain separate groups of fonts for their own use? [131071000080] |Thanks. [131071010010] |To add fonts while the X server is running, use the command: [131071010020] |And, if needed: [131071010030] |To make this permanent, add a FontPath line to your xorg.conf. [131071020010] |You can download fonts (e.g. TTF files) and then put them in ~/.fonts (for personal use) or /usr/share/fonts (for everyone). [131071020020] |After that, X should pick up the font list and make it available to all applications (you may have to log out and log back in first). [131071030010] |How to make sendmail write to the new maillog [131071030020] |I want to rotate my maillog, but I want to make sure the newly rotated log contains the last 2M lines from the previous log: [131071030030] |This all works, except for the fact that sendmail continues to write to the backup log file "maillog.yyyymmdd" instead of the proper "maillog"! [131071030040] |What is the correct way to do this without having to stop sendmail first, rotate, then start again? [131071040010] |Usually the maillog is written by syslogd, not by sendmail itself, so you should send SIGHUP to syslogd. [131071050010] |Fixing CTRL-* in vim under GNU screen [131071050020] |When running vim under GNU screen, I'm finding that combinations of CTRL with the arrow and Pg* keys don't work as expected. [131071050030] |I'm using the Ubuntu 10.10 vim-gnome package. [131071050040] |On a different machine, also running Ubuntu, this did work without problems; unfortunately I don't have that configuration available to me now. [131071050050] |There is a related question here: How to fix Ctrl + arrows in Vim? [131071050060] |However, the suggested solution there is to remap vim's keybindings to work with the terminal emulator, in that case PuTTY. [131071050070] |I don't recall doing anything of the sort, and suspect that there is a screen configuration option which will resolve this issue. [131071050080] |There's also a thread on the gnu-screen mailing list which suggests that running vim via $ TERM=xterm vim is an appropriate fix or workaround. [131071050090] |This does work, but I'm a bit concerned that there might be side effects. [131071050100] |It also doesn't sound familiar enough to be the solution I set up on the other machine (if a solution was necessary). [131071060010] |As intuited stated in his update, adding term xterm to the ~/.screenrc file seems to fix this problem.
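For reference, the relevant ~/.screenrc entry is a single directive (a minimal sketch; term xterm is a real screen setting, the comment is just annotation):

    # ~/.screenrc
    term xterm    # make screen set TERM=xterm in new windows instead of TERM=screen

With this in place, vim instances started inside screen see TERM=xterm, so the CTRL key combinations behave as they do outside screen.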
[131071070010] |There are a couple of other ways to set the terminal which work in running processes:
  • [131071070020] |In a running screen instance, pressing ^A-: and issuing the command term xterm will cause newly opened screens under that instance to start with their $TERM environment variable set to xterm; this will in turn propagate to invoked vim instances. [131071070030] |These vim instances will display proper behaviour with regard to CTRL-combos; I've not yet discovered any side effects of this strategy. [131071070040] |This command does not affect existing screens. [131071070050] |This command can of course be used in a ~/.screenrc file, so it's possible that this method was used on the other machine.
  • [131071070060] |In a running vim instance, the command set term=xterm will make CTRL-combos work in that vim instance. [131071070070] |This has the side effect of disconnecting the X clipboard (i.e. @* and @+) for reasons that I do not yet understand. [131071070080] |Interestingly, the clipboard side effect also happens when the command :set term=screen is executed in a vim instance started with $TERM=xterm.
[131071080010] |Is there a GUI alternative to GParted? [131071080020] |GParted 0.7.0 is less than ideal, so I am looking for something else. [131071080030] |Is there a credible alternative? [131071080040] |[update] The two issues I was having had nothing to do with GParted. [131071090010] |Unless something changed recently, GParted is the de facto standard tool. [131071090020] |The only viable alternatives are partially-CLI, partially-GUI tools like cfdisk. [131071100010] |Differentiating between Hard and Soft Dependencies [131071100020] |I will ask this with an example - [131071100030] |I have installed gnash-plugin on Fedora 64-bit with Yum. [131071100040] |It pulled in the following packages - [131071100050] |Now, I tested the plugin and I didn't like it. [131071100060] |I want to remove all these above packages which got installed with the plugin, as I am no longer going to need them. [131071100070] |How can I do this? [131071100080] |I checked remove-with-plugin for yum, but it pulls in all the packages which currently depend on the packages. [131071100090] |I understand the thought process behind showing what packages are getting affected - but I am wondering if there is any way of looking at the history of what packages got installed when I installed a certain package. [131071100100] |When gnash-plugin wasn't there, firefox was running fine, but after the installation firefox now depends on this new plugin. [131071100110] |Has anyone worked on differentiating hard dependencies (hard means the program will break if that package is not there) and soft dependencies (soft means the program may not be affected fatally)? [131071110010] |In Ubuntu/Debian land we implement "hard/soft" dependencies by having actual Depends but also Recommends. [131071110020] |We also have Suggests, which are even softer soft dependencies. [131071120010] |If you are on a supported version of Fedora, you can just do: [131071120020] |...and then: [131071120030] |...if you get the very latest yum (e.g. from the yum-rawhide rebuild repo on repos.fedorapeople.org) then you can also do: [131071130010] |Debian and derivatives have hard/medium/soft dependencies, but that doesn't solve your problem. [131071130020] |APT, the Debian equivalent of Yum, distinguishes between manually installed and automatically installed packages, which solves your problem (automatically installed packages are removed if no manually installed package depends on them). [131071130030] |I don't know if this feature has been ported to Yum. [131071140010] |How can I rename photos, given the EXIF data? [131071140020] |Let's say I have a bunch of photos, all with correct EXIF information, and the photos are randomly named (because of a problem I had). [131071140030] |I have a little program called jhead which gives me the below output: [131071140040] |Now I need to rename all the photos in the folder in the following format: [131071140050] |where the lowest number would be the oldest image, and the highest the newest. [131071140060] |I'm not so good at scripting, so I'm asking for help. [131071140070] |I think a bash script is enough, but if you feel more comfortable, you can write a python script. [131071140080] |I thought of something like: [131071140090] |but I don't know how to do that for all the files at once. [131071140100] |Thanks in advance! [131071150010] |You can do it for all files using a for loop (in the shell/in a shell script): [131071150020] |This is just a very basic outline.
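A hedged sketch of such a loop (the original code block was lost; the "Date/Time" field name and its "YYYY:MM:DD hh:mm:ss" layout are assumptions about jhead's output, so adjust the parsing to what jhead actually prints for your files):

    for f in *.jpg; do
        # pull the EXIF timestamp, e.g. "2010:01:15 14:30:02" -> "20100115143002"
        d=$(jhead "$f" | awk -F': ' '/^Date\/Time/ {print $2}' | tr -d ': ')
        echo mv -- "$f" "$d.jpg"
    done

Timestamp-based names sort chronologically, which gets you the oldest-to-newest ordering the question asks for.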
[131071150030] |Delete the echo when you have verified that everything works as expected. [131071160010] |Just found out here that jhead can do it all for you! :) [131071170010] |What's the best way to partition your drive? [131071170020] |I usually install Linux on a single partition since I only use it as a personal desktop. [131071170030] |However, every now and then I reinstall the box, and what I simply do is move my files around with an external hard disk. [131071170040] |So, how could I prevent that when reinstalling my box (e.g. switching to another distro)? [131071180010] |Depends on the usage, and the OS really. [131071180020] |On my main desktop I have the space split between / and another partition where I keep my documents/music etc. [131071180030] |Since /home will have user configuration and stuff in there, I wouldn't keep it intact between installs; I just symlink my document/music folders into my homedir. [131071190010] |Keep your /home on a separate partition. [131071190020] |This way, it will not be overwritten when you switch to another distro or upgrade your current one. [131071190030] |It's also a good idea to have your swap on its own partition. [131071190040] |But that should be done automatically by your distro's installer. [131071190050] |The way my laptop is set up, I have the following partitions: [131071190060] |/ /home /boot swap [131071200010] |There are a number of guides that can help with this, and as theotherreceive pointed out, it can be OS specific. [131071200020] |What Solaris suggests may not be what Ubuntu suggests. [131071200030] |For instance, Solaris (and maybe HP-UX) uses /export/home as the mount point for home dirs; Linux uses /home. [131071200040] |There's no real magic to it, in fact what I'd say is you've hit the nail on the head. [131071200050] |One partition doesn't cut it for your needs. [131071200060] |So make a change. [131071200070] |Use the guides as an example (you can even learn why /etc is /etc and other neat trivia with the right document). [131071200080] |Here's an example (pulled at random from a Google search): [131071200090] |http://content.hccfl.edu/pollock/aunix1/partitioning.htm [131071210010] |The minimum setup should have / and /home in separate partitions. / should have at least 18GB, in my experience. [131071210020] |I usually have a third partition called /code where I keep all my work code, and use /home for downloads, documents and other non-code related things. [131071210030] |When I reinstall, I just back up whatever I have in /home that I want to keep to /code. [131071220010] |Rather than relying just on partitions, under Linux I usually recommend using LVM. [131071220020] |This allows you to grow and shrink filesystems as needed later (though growing tends to be easier than shrinking), even while the file system is active in the case of ext2/3/4 (I've successfully increased the size of filesystems while they were active, though I've never tried to decrease the size of one this way). [131071220030] |Obviously this still leaves you to decide how to apportion the space, but you don't have to get it exactly right from the get-go because rearranging things later is easier. [131071220040] |I usually give each volume as much space as I think it will need plus a chunk for good luck, and leave the remaining space for adding new filesystems or expanding existing ones into later. [131071230010] |No one seems to have mentioned /usr/local. [131071230020] |I often make this a separate partition.
[131071230030] |If you're in the habit of installing software compiled from source or other 3rd party software that isn't part of your distro, having it in /usr/local means two things:
  • [131071230040] |You can share /usr/local over NFS and have "installed once, installed everywhere" functionality if you have more than one system with the same OS version.
  • [131071230050] |You can keep this partition across re-installs and then you don't need to re-install everything here. [131071230060] |Note: when you upgrade the OS some of your libraries may change and you might have to recompile some things. [131071230070] |But not always.
[131071230080] |Besides /usr/local I also keep a separate /home for obvious reasons, and a separate /Files which is where I put stuff that is meant to be "shared", such as the family MP3 and video collection. [131071230090] |Depending on your usage /home might be a better place for your music, but if it's in a separate partition it's easy to have it on a separate hard disk even if you don't use LVM. [131071240010] |What Unix commands can be used as a semaphore/lock? [131071240020] |I want to run multiple Bash shell scripts in parallel. [131071240030] |However, I want to avoid race conditions. [131071240040] |What Unix commands are truly atomic that I could use for this purpose, and how can I use them? [131071250010] |Maybe the lockfile command will do what you need. [131071260010] |lockfile(1) looks like a good candidate, though beware that it's part of the procmail package, which you may not have installed on your machine yet. [131071260020] |It's a popular enough package that it should be packaged for your system if it's not installed yet. [131071260030] |Three of the four systems I checked have it, and the other has it available. [131071260040] |Using it is simple: [131071260050] |The options I've given make it retry once a second for up to 15 seconds. [131071260060] |Drop the "-r" flag if you want it to wait forever. [131071270010] |If lockfile is not installed on your system, then mkdir will do the work: it's an atomic operation, and it fails if the directory already exists (as long as you don't add the -p command-line switch). [131071280010] |flock(1) [131071280020] |This ensures that code between "(" and ")" is run only by one process at a time and that the process doesn't wait too long for a lock. [131071290010] |The system call mkdir() is atomic on POSIX filesystems. [131071290020] |So, using the mkdir command in such a way that it involves exactly one call to 'mkdir()' would achieve your purpose. [131071290030] |(IOW, don't use mkdir -p). [131071290040] |The corresponding unlock is rmdir of course. [131071290050] |Caveat emptor: mkdir() might not be atomic on network filesystems. [131071300010] |Uploading directories with sftp? [131071300020] |I'm having some trouble uploading directories (which contain other directories a few levels deep) by sftp. [131071300030] |I realize I could work around this by gzipping, but I don't see why that's necessary. [131071300040] |Anyway, I try [131071300050] |I think the last error message is completely stupid. [131071300060] |So the directory doesn't exist? [131071300070] |Why not create the directory? [131071300080] |Is there any way around this issue with sftp, or should I just use scp? [131071310010] |May I suggest a somewhat complicated answer, without zipping, but including tar? [131071310020] |Here we go: [131071310030] |This will pack the directory ./bin with tar (-cf := create file), filename - (none, stdout) and pipe it through the ssh command to target.org (which might as well be an IP) where the command in quotes is performed, which is: cd to blah, and tar -xf (extract file) -, no name, just stdin. [131071310040] |It's as if you pack a package at home, bring it to the post office, then drive to work, where you expect the package and open it. [131071310050] |Maybe there is a much more elegant solution which just uses sftp. [131071320010] |It looks like there's no option for this with the OpenSSH SFTP client.
[131071320020] |You could use the PuTTY psftp command line tool instead, as its put and mput commands accept a -r (recursive) flag. [131071320030] |It's in the putty-tools package under Debian (and most likely Ubuntu). [131071320040] |Alternatively, Filezilla will do what you want, if you want to use a GUI. [131071330010] |You might be interested in using rsync instead. [131071330020] |The command for that would be [131071330030] |This will copy everything in bin/ and place it on the remote server in /home/earlz/blah/bin/. [131071330040] |As an added benefit, it will first check to see whether the file on the remote side has changed, and if it hasn't, it won't re-send it. [131071330050] |Additionally, you can add a -z option and it will compress it for you. [131071340010] |What are the cons of posting dmidecode output publicly? [131071340020] |I remember it being mentioned somewhere that posting dmidecode output publicly is risky (security/privacy?). [131071340030] |Can you shed some light on this? [131071350010] |dmidecode output contains private information:
  • [131071350020] |Serial Number
  • [131071350030] |IPMI support
[131071350040] |But this information does not make your system vulnerable. [131071360010] |How to loop over the lines of a file? [131071360020] |Say I have this file: [131071360030] |This program [131071360040] |outputs [131071360050] |I'd like to have the for iterate over each line individually while ignoring whitespace, though; i.e. the last two lines should be replaced by [131071360060] |Using quotes for i in "$(cat $1)"; results in i being assigned the whole file at once. [131071360070] |What should I change? [131071370010] |With for and IFS: [131071370020] |Or with read (no more cat): [131071380010] |Why do we use su - and not just su? [131071380020] |I don't understand why su - is preferred over su to log in as root. [131071390010] |su - logs you in completely as root, whereas su makes it so you are pretending to be root. [131071390020] |The most obvious example of this is that ~ is root's home directory if you use su -, but your own home directory if you use su. [131071390030] |Depending on your system, it may also mean differences in prompt, PATH, or history file. [131071390040] |So if you are part of a team administering a system, and your colleague gives you a command to run, you know it will work the same if you are both using su -, but if you are both using su, there may be differences due to you having different shell configurations. [131071390050] |On the other hand, if you want to run a command as root but using your own configuration, then maybe su is better for you. [131071390060] |Also don't forget about sudo, which has a -s option to start a shell running as root. [131071390070] |Of course, this has different rules as well, and they change depending on which distribution you are using. [131071400010] |su - invokes a login shell after switching the user. [131071400020] |A login shell resets most environment variables, providing a clean base. [131071400030] |su just switches the user, providing a normal shell with an environment nearly the same as with the old user. [131071400040] |Imagine you're a software developer with normal user access to a machine and your ignorant admin just won't give you root access. [131071400050] |Let's (hopefully) trick him. [131071400060] |Now, you ask your admin why you can't cat the dummy file in your home folder; it just won't work! [131071400070] |If your admin isn't that smart or just a bit lazy, he might come to your desk and try with his super-user powers: [131071400080] |Wow! [131071400090] |Thanks, super admin! [131071400100] |He, he. [131071400110] |You may have noticed that the corrupted $PATH variable was not reset. [131071400120] |This wouldn't have happened if the admin had invoked su -. [131071410010] |Also worth knowing, although not an answer: There is also su --, which behaves like su -, but does not change the current directory. [131071420010] |How to copy files nested in directories that match a pattern? [131071420020] |Hi there, [131071420030] |I'm looking to copy files from subdirectories that match this pattern [131071420040] |into a folder [131071420050] |I'm thinking that this could be done several ways, but wanted to see if a guru could come up with something slick! [131071420060] |Thanks in advance :-) [131071430010] |Sounds pretty easy: [131071430020] |Or if the first * should match a whole subtree, use something like: [131071440010] |How to search for packages that are no longer available for installation? [131071440020] |I can't re-install packages that fulfill at least one of the following criteria:
  • [131071440030] |Custom-made packages
  • [131071440040] |Packages created by alien (e.g. alien --install pkg.rpm)
  • [131071440050] |Packages installed from some repository that is no longer available
  • [131071440060] |Packages installed from some repository, but which are no longer available there
  • [131071440070] |Packages whose repositories have been removed from "/etc/apt/sources.list"
[131071440080] |How do I list any of these packages? [131071450010] |You should be able to list packages installed via alien just by running rpm -qa. [131071450020] |For Debian packages, maybe some combination of apt-cache policy and apt-get autoclean can help? [131071460010] |Aptitude lists them under “Obsolete and Locally Created Packages”. [131071460020] |The corresponding search pattern is ?obsolete or ~o. [131071470010] |What is the meaning of Scratchbox2? [131071470020] |Hi, [131071470030] |Somewhere I heard the term 'Scratchbox2'. [131071470040] |Why is this used? What is the exact meaning of this? [131071470050] |What are the advantages over Scratchbox 1? [131071480010] |Scratchbox is a cross-compilation suite. [131071480020] |It looks like Scratchbox 2 (sbox2) is a new version of Scratchbox 1, maintained by Nokia for Maemo development. [131071480030] |You can use it for general cross-compilation too. [131071480040] |Or to put it a better way: a fork of Scratchbox 1. [131071480050] |The main advantage of Scratchbox 2 is a more elegant approach to some corner cases. [131071480060] |Scratchbox 2 also performs better, as it uses native tools whenever possible, and otherwise emulation. [131071480070] |For Scratchbox 2 you don't have to modify any host tools. [131071480080] |Scratchbox 1 may require some changes and recompilations. [131071490010] |What is the difference between cross-compiling and native compiling? [131071490020] |What is the exact difference between cross-compiling and native compiling? [131071500010] |You use a cross compiler to produce executables (or objects) for a platform other than the local host. [131071500020] |The native compiler only produces native binaries. [131071510010] |Cross compiling is compiling something for a different CPU type than the one you are running on. [131071510020] |An example is compiling ARM binaries under an i386 system, or compiling 64-bit executables under a 32-bit system. [131071510030] |You normally won't be able to run what you've just compiled when you cross compile it, until you ship the binaries to the system they belong to. [131071510040] |Native compiling is when you compile for the same architecture you're running under, which is the normal situation. [131071520010] |Has OpenBSD's ntpd replacement been ported to other BSDs? Or Linux? [131071520020] |I know the OpenBSD team wrote a from-scratch replacement for ntpd, OpenNTPD, that is available via pkgsrc/ports. [131071520030] |My question is more about the base system: Is OpenNTPD the default ntpd in the base system in NetBSD or FreeBSD? [131071520040] |For that matter, what about Linux distros, some of which (Gentoo or Arch, for example) are more "BSDish"? [131071520050] |Thanks. [131071530010] |I just looked at the FreeBSD source for the non-ports version of ntpd; the FreeBSD ntpd is the ntp.org version. [131071530020] |There's an openntpd in ports though. [131071530030] |It looks like NetBSD has a package for ntpd and openntpd. [131071530040] |I can't see any reason why you couldn't run OpenNTPd on any Linux distro. [131071530050] |It might make sense, considering the weird licensing that exists on the current ntp.org ntpd (which is mostly BSD, with a touch of MIT, GPL and public licenses). [131071540010] |openntpd is included in Ubuntu - since Dapper (6.06, that is). [131071540020] |So, it must be the same for Debian. [131071540030] |I am sure you can easily find the same information for rpm-based distros. [131071540040] |But it is not the default ntp daemon.
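A hedged sketch of how you might check for and opt into it on a Debian/Ubuntu box (the package name openntpd is as stated above; whether it conflicts with an already-installed ntp package depends on the release):

    apt-cache policy openntpd      # is the package available, and is it installed?
    sudo apt-get install openntpd  # opt in; the ntp.org daemon stays the default otherwise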
[131071550010] |openntpd has to be patched to be useful on Linux or it won't adjust the system clock. [131071550020] |Moreover, the porting to other OSes is done by a separate team and is always behind the main branch, so it has very small chances of becoming the default ntpd on any OS other than OpenBSD. [131071550030] |If you are looking for an alternative ntp server, the only actual competitor to ntp.org ntpd seems to be chrony. [131071560010] |How to move files with specific extensions to newly created directories [131071560020] |I want to mv files with specific extensions to newly created directories. [131071560030] |This script moves files with all the extensions into all the newly created folders, which is not the intent. [131071560040] |Instead, it should move extension-specific files to specific folders. [131071560050] |How can I accomplish that? [131071570010] |I have not verified this, but it should do the job. [131071580010] |This should work: [131071590010] |X Window opened from screen [131071590020] |Bit of an oddish one: if I've opened an X window over SSH in a screen session, will it reopen when screen is reattached? [131071590030] |I've been able to detach and the process stays running fine, with the X window continuing to update with me logged onto the server, but I'm just curious if I'll be able to detach and log off, then reattach and bring the window back. [131071600010] |If I understand you correctly, you logged into a remote system with SSH with X forwarding turned on, and then started up an X application and sent it to the background. [131071600020] |If you log off completely, the SSH tunnel that provides the X forwarding feature will also close, and the X app will be terminated. [131071600030] |However, most likely what will happen when you try to log off is that the SSH session will remain open until the X application exits. [131071600040] |The screen session is inconsequential at this point. [131071600050] |Interestingly enough, there appears to be some work on being able to save and restore X sessions like screen does with terminal sessions. [131071610010] |How do I get a long listing for a single directory? [131071610020] |I want to see a long listing for a single directory. [131071610030] |When I type ls -lha, I see this: [131071610040] |However, when I type ls -lha phpMyAdmin-3.3.5, I get a listing of the contents of the directory. [131071610050] |How do I type the command so that I see only [131071610060] |? [131071610070] |I want to do this with symlinked directories, so I can see where they're linked to, not their contents. [131071620010] |The -d flag is used to tell ls that you want to show the properties of the given directory, not its contents. [131071630010] |Sed to reverse two words [131071630020] |echo "AXIS2C_HOME=/usr/local/Axis2C" | sed 's/(^AXIS2C_HOME=) (.*)/ \2 \1/' [131071630030] |The output I am expecting is /usr/local/Axis2C AXIS2C_HOME= [131071630040] |I can't figure out what I am doing wrong :( [131071640010] |You have an erroneous space after the =. Try: [131071640020] |The following also works and is a bit shorter. \1 will be anything before the first / [131071650010] |The trivial answer is "more backslashes, fewer spaces": [131071650020] |But the broader answer is, "wait, what are you trying to do?" [131071650030] |Do you want the key-value pairs to be split into useful variables, or are you really just trying to munge the input into the reverse syntax in order to feed it to something else?
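For reference, the corrected command those answers describe probably looked like this (a reconstruction, since the original code blocks were stripped): in basic sed the groups must be backslash-escaped, and the stray spaces around and between them have to go:

    echo "AXIS2C_HOME=/usr/local/Axis2C" | sed 's/\(^AXIS2C_HOME=\)\(.*\)/\2 \1/'
    # prints: /usr/local/Axis2C AXIS2C_HOME=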
[131071660010] |Sed is great, but Perl can do this too, and it doesn't require the forest of backslashes: [131071660020] |Plus, you've got the full power of the regular expression engine, so you can do even more complex patterns. [131071670010] |Adding Sleep command to shell script? [131071670020] |How can I add a 10sec sleep to my loop after each page is processed? [131071680010] |Should work [131071690010] |Sources to learn about performing certain tasks on command line? [131071690020] |Can someone transform it into community wiki please? [131071690030] |I guess I've got no privileges for that. [131071690040] |Let's list practical sources for learning about performing certain tasks on the *nix command line. [131071690050] |E.g. [131071690060] |I want to delay execution of a script and I don't know about sleep(1). [131071690070] |I can do apropos delay, spot there sleep(1) and finally man 1 sleep. [131071690080] |What else comes to your mind? [131071700010] |commandlinefu is one such resource. [131071700020] |The GNU manual is also very useful. [131071700030] |However, my final answer has to be Google, adding 'site:' qualifiers for one of the above sites, or unix.se as appropriate. [131071710010] |How can I get a list of all scheduled cron jobs on my machine? [131071710020] |My sysadmin has set up a bunch of cron jobs on my machine. [131071710030] |I'd like to know exactly what is scheduled for what time. [131071710040] |How can I get that list? [131071720010] |Probably depends on the crond you are using. [131071720020] |For example, with Vixie-Cron (debian/ubuntu default) you get it for the current user via: [131071720030] |or for another user via [131071720040] |To get the crontabs for all users you can loop over all users and call this command. [131071720050] |Alternatively, you can look up the spool files. [131071720060] |Usually, they are saved under /var/spool/cron, e.g. for vcron the following directory [131071720070] |contains all the configured crontabs of all users - except for the system-wide crontab, which is at [131071730010] |Depending on how your linux system is set up, you can look in:
  • [131071730020] |/var/spool/cron/* (user crontabs)
  • [131071730030] |/etc/crontab (system-wide crontab)
[131071730040] |Also, many distros have:
  • [131071730050] |/etc/cron.d/* (these configurations have the same syntax as /etc/crontab)
  • [131071730060] |/etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, /etc/cron.monthly
[131071730070] |These are simply directories that contain executables that are executed hourly, daily, weekly or monthly, per their directory name. [131071730080] |On top of that, you can have at jobs (check /var/spool/at/), anacron (/etc/anacrontab and /var/spool/anacron/) and probably others I'm forgetting. [131071740010] |How to automatically encrypt, sign, and forward root's email? [131071740020] |I've recently been teaching myself about the BSDs and decided to pick up a NetBSD VPS. [131071740030] |I don't always log in to this box every day (it's not actually doing anything that important), but I'd still like to monitor root's mail. [131071740040] |To that end, I started looking into how to forward root's mail to an external account. [131071740050] |I learned a little about the /etc/aliases file, and it looks like I might be able to build up a pipeline to do this for me, but I'm treading into unfamiliar territory. [131071740060] |Is there a tutorial that covers this sort of thing? [131071740070] |Is it even a good idea? [131071740080] |Thanks. [131071750010] |You could create a normal user, e.g. juser, and add it to the /etc/aliases file on the right hand side of the root entry. [131071750020] |For normal forwarding of root mail you would just create a .forward (which contains your external e-mail address) in the home directory of juser. [131071750030] |Regarding encryption, you can use an MDA (mail delivery agent) for that, e.g. procmail. [131071750040] |Instead of a .forward you have to create a .procmailrc file in the home directory of juser in that case. [131071750050] |Via .procmailrc you can pipe a message (header/body) through an external program, e.g. a simple script that basically contains some gpg command. [131071750060] |And with the right rule at the end of the procmailrc, you can forward the (processed) message to your external e-mail address. [131071760010] |The catch-all way to do this is to edit the /etc/aliases file: [131071760020] |Run newaliases and you should be good to go. [131071760030] |Note you probably have to doctor your MTA (sendmail, or whatever) so that the messages emitted don't get deleted as spam, but that's a larger issue. [131071770010] |How do you run a dry run of rm to see what files will be deleted? [131071770020] |I want to see what files will be deleted when performing an 'rm' in linux. [131071770030] |Most commands seem to have a dry run option to show just such information, but I can't seem to find such an option for rm. [131071770040] |Is this even possible? [131071770050] |(Note: sorry if I gave this question the wrong tags, I wasn't exactly sure what tags were appropriate for the question) [131071780010] |You can say: [131071780020] |to run it in interactive mode, so rm will prompt you to confirm whether each file should be deleted. [131071780030] |You could just answer no to each file to see which ones would be affected. [131071790010] |Say you want to run: [131071790020] |You can just run: [131071790030] |or even just: [131071790040] |to see what files rm would delete, because it's the shell expanding the *.txt, not rm. [131071790050] |The only time this won't help you is for rm -r. [131071790060] |If you want to remove files and directories recursively, then you could use find instead of rm -r, e.g. [131071790070] |then if it does what you want, change the -print to -delete: [131071800010] |How to search the whole of bash history without needing to go forwards and backwards?
[131071800020] |At times when I do Ctrl+R to search through command history, I find that I don't reach the entry I was looking for, even though I know it's in history. [131071800030] |I think this is due to me having passed the entry, forcing me to use another command to go the other direction. [131071800040] |Is there a command that allows the whole of history to be searched by looping through it (restarting the cycle) when it reaches the end? [131071810010] |You could use Alt+> to go back to the end of your history then search again, but maybe you knew that, and it's not what you want. [131071810020] |Unfortunately, I don't think there's a way to make it wrap around. [131071810030] |I can't see anything about it in man bash or man readline. [131071810040] |We could try writing a readline macro, but there's no variable telling us what line of history we're on. [131071810050] |So short of that, here are some other suggestions that you might find viable alternatives: [131071810060] |If your terminal is configured to send Ctrl+S (e.g. by running stty stop undef), then you can change direction just by pressing Ctrl+S. [131071810070] |You could use history | grep, then type ! to run it. [131071810080] |If that's too long to type, make it a function to save some typing, e.g. [131071810090] |Set up history-search-backward and history-search-forward. [131071810100] |They can be easier to use. [131071810110] |e.g. ls Alt+P, Alt+P, Alt+P will search backwards through all your ls commands. [131071810120] |Plus you can change direction just by pressing the other shortcut, e.g. if you're at the oldest command, you can just switch from pressing Alt+P to pressing Alt+N. [131071810130] |Set it up by putting this in your /etc/inputrc or .inputrc for bash: [131071810140] |and this in your .zshrc for zsh: [131071810150] |You could even go one step further and make the Up arrow do this. [131071810160] |Finally, are you sure the command is being entered into history? [131071810170] |Maybe it's being ignored due to the HISTCONTROL or HISTIGNORE settings, or it's falling off the end due to HISTSIZE? [131071820010] |How to get history search to start afresh? [131071820020] |When I do a search using Ctrl+R, the search gets limited to entries older than the displayed hit. [131071820030] |How do I reset this so that it starts from the beginning (i.e. from the latest history entry) without leaving the shell? [131071830010] |Press Alt+>. [131071840010] |Ctrl+G will abort the search; starting a new one will begin with the latest history entry again. [131071850010] |Executing a shared library [131071850020] |Some shared libraries provide an output when called from the command line as if they were executables. [131071850030] |For example: [131071850040] |In a shared library of my own written in C, how can I provide this output? [131071850050] |I've now executed a library I just made and I get a segmentation fault. [131071860010] |Try defining a void __libc_main() or void _init(). [131071870010] |Does the "play" utility in Linux really have a "delay" to stop its process with really small files? [131071870020] |I have a wav file, Duration: 00:00:00.17 (less than a second!) [131071870030] |When I call play to execute it, play executes it fine, but my terminal becomes idle for ~4s until the play process is done. [131071870040] |Is this by design? [131071870050] |Is it possible to just play the sound and be done in less than a second?
[131071870060] |EDIT: Running time as suggested by @jsbillings: [131071880010] |I think this is normal sound system latency (mainly buffering) as well as an artifact of program flow (buffers, synchronous i/o, polling). [131071880020] |The ring buffer[s] being played are presumably much larger than the puny sample lasting for only 00:00:00.17 seconds. [131071880030] |Is this delay proportional to the duration of the sample? [131071880040] |I.e., does a longer sample have a smaller delay? [131071880050] |I would expect a sample of greater size (say a full second or two) to reduce these kinds of delays. [131071880060] |Sound can be really tricky stuff, especially if you look into the nitty gritty. [131071880070] |If what I've said above is true (regarding a longer sample size) I'd say this is normal for whatever sound subsystem you're using. [131071880080] |I myself use pulseaudio for low-latency stuff (like guns in games), but the problem you described isn't really related to low latency; it's more a question of the software waiting for the hardware to tell it when it's done playing the whole buffer, which is larger than the sample it contained. [131071880090] |If I'm wrong about something, please point it out to me. [131071880100] |Thanks :) [131071890010] |Wine vs Virtualbox? [131071890020] |I have used Wine before. [131071890030] |I recently heard of VirtualBox. [131071890040] |Do they do the same thing? [131071890050] |What are the differences and relative merits of these? [131071900010] |They are not the same, no. VirtualBox is a "virtual machine", which means that it creates a system where the software inside thinks it is on a real piece of hardware; VirtualBox can run Windows, MacOSX, Linux, SunOS (for x86), etc. [131071900020] |It would be an operating system once you start, and then you would need to install the applications you desire to run. [131071900030] |With VirtualBox, a Windows app will look like a Windows app and a MacOSX app will look like a MacOSX app. [131071900040] |Wine is an MS-Windows interface emulator. [131071900050] |It mimics the windowing libraries so an MS-Windows GUI program can display in XWindows instead of WinXP/Vista/Win7. [131071900060] |It is not an environment; it can only run one program (but you can start multiple wine apps). [131071900070] |Because it is mimicking the standard MS libraries, not all Windows programs may run under it if they need additional libraries or if they bypass the standard libraries and try to access lower-level libraries/interfaces. [131071900080] |With Wine, it may not look like a Windows app once it is running. [131071900090] |VirtualBox takes a lot more room (creating a copy of the guest OS), but it is much more reliable than apps using Wine (usually not Wine's fault, but the app's fault). [131071900100] |And if you have an application that needs support programs (like Putty using Pageant), that won't work with Wine, but works very well in VirtualBox. [131071910010] |Multiple "-bash command not found" messages in Mac OS X Snow Leopard Unix [131071910020] |When I open terminal I get an automatic error message -bash: PATH command not found. Then I am stumped by multiple -bash "x" command not found. messages, even for simple commands such as ls, cd, mkdir, rm. [131071910030] |Just about the ONLY commands I get a response to are echo and export. [131071910040] |Could my Unix system files be corrupt, as has been suggested in some searches for help?
[131071910050] |My hunch is that there is some setting that directs me away from the proper spot to find UNIX commands, but I don't know how to fix that. [131071920010] |Your .bashrc or .bash_profile files (or other startup files) contain a typo, and your PATH is invalid. [131071920020] |To really be certain, we'd need to see your .bashrc or .bash_profile files. [131071920030] |Commands like ls mkdir rm won't work because your shell cannot find them in your PATH, because your PATH is invalid. [131071920040] |Commands like echo and export are built into Bash, which is why they work. [131071920050] |I can't explain the problem with cd (perhaps a mistake?) [131071920060] |Search your .bashrc or .bash_profile files and look for the lines where PATH is defined. [131071920070] |You might have a bad definition where the second PATH doesn't start with a $, like this: [131071920080] |It should say something like this: [131071930010] |Clustering Servers to run one vm [131071930020] |I am looking at clustering about five machines and I wanted to get some opinions about what is the best way to go about it. [131071930030] |I'm wanting to run one VM over the five machines, so firstly I want to ask: [131071930040] |1: If I cluster the five machines, will their hardware be combined? [131071930050] |HDDs, CPUs, ports, etc. [131071930060] |2: Would the hardware be accessible (usable) within the VM? [131071930070] |Ultimately what I am asking is: if I cluster five machines and run a VM over them, will that make the VM a 'speed demon', combining CPUs and HDDs? [131071930080] |Also, what are some of the clustering operating systems out there at the moment? [131071930090] |I have looked around but it seems that a lot of it is deprecated or just old. [131071930100] |If you need any more information just ask. [131071930110] |NOTE: I'm not wanting the cluster to process files in parallel; I want the speed of the processors, RAM and HDDs combined into one VM working on top of the cluster. [131071940010] |There is no way to "just" combine the resources of the individual machines. [131071940020] |The software you run on top of your cluster has to be written with parallelism in mind. [131071940030] |You can't expect a single computation-intensive process to magically split up between multiple cores, or even machines. [131071940040] |And even if the software is written for parallel execution, once you go from multiple cores to multiple machines you can easily run into a message passing bottleneck over the slow network link in between machines. [131071940050] |I'm curious, what were the "deprecated solutions" that you found so far? [131071940060] |Do any of those do what you're looking for? [131071950010] |What you're talking about is called a Single System Image cluster, or sometimes a distributed shared memory system (in a more limiting context). [131071950020] |There are some projects listed on the linked Wikipedia pages you should look into. [131071950030] |I've used the SGI Altix cluster (the NUMAlink ones), and it can be quite powerful if you have a process that requires a huge memory footprint. [131071950040] |With the increase in CPU speed and memory capacity of individual nodes, this seems to be a less popular computational clustering technique these days. [131071950050] |Most people use message passing APIs to allow parallel computation to communicate over fast interconnects or simple Ethernet.
[131071960010] |Locate a directory within an archive [131071960020] |I have a .tgz archive and I'm trying to locate a directory within it to extract. How do I search within a .tgz? I tried the following below, but no luck. [131071970010] |Would it not be easier all in one go? [131071970020] |I think the reason it appears not to work with xargs is that your grep will find the directory first (pass it on, and tar will extract it all), then grep will continue with the contents (pass them on, and tar will fail to find those files since it has already extracted them). [131071970030] |But this is only a guess. [131071980010] |grep acting strangely [131071980020] |Grep has been acting strangely on one of my systems (Ubuntu Desktop - all my other systems are Ubuntu Server), and I can't figure out why. [131071980030] |I created a control file named text that contains the following text: [131071980040] |The following commands work on all of my systems except the problem child: [131071980050] |On my problem child grep simply hangs. [131071980060] |I have compared .bashrc, .bash_aliases, and even /etc/bash_completion, but I can't find the problem. [131071980070] |Any ideas what could be causing the problem? [131071990010] |If it hangs, it sounds to me like it is looking for input from stdin. [131071990020] |You can prove that by typing Control-D (once only) - it will return to the shell. [131071990030] |If this is correct, then * does not find any files. [131072000010] |On the problem machine, in the directory where you run grep 'something' *, are there any special files like sockets, named pipes (FIFOs), etc.? [131072000020] |At least in the case of a named pipe, grep won't get an EOF from the named pipe until something actually writes an EOF to the named pipe. [131072010010] |What's the name of the technology that instantly boots to Linux from within Windows? [131072010020] |I once had some package that you could run to instantly replace a running Windows instance with a running Linux instance. [131072010030] |I'm not talking about virtualization or coLinux. [131072010040] |I'm talking about the moral equivalent of hot-swapping out the Windows kernel and replacing it with a Linux kernel. [131072010050] |It may have only worked on Win9x for all I can remember. [131072010060] |But I haven't been able to think of the name or find it since I happened upon it many years ago. [131072020010] |I think it's GRUB4DOS using Win9x's option to start DOS applications in compatibility mode. [131072030010] |You're probably remembering loadlin. [131072040010] |Preloading the OOM Killer [131072040020] |I don't have any swap partition/file on my machine, and only 2GB of RAM. [131072040030] |Sometimes it happens that the memory gets saturated by some process (Xorg+browser+compiler+...) and the system hangs indefinitely, and the only way to restart it (other than hard reset) is with SysRq. [131072040040] |I understood that the Out Of Memory Killer won't help me because when the memory is completely full, the kernel cannot allocate the OOM Killer itself. [131072040050] |Is there any way to preload the OOM Killer, so that it can actually work when memory is completely full? [131072040060] |Or is it possible to tweak the kernel so that the OOM Killer gets activated when my RAM is full at ${TOTAL_RAM} - 10MB? [131072050010] |I'm fairly sure that the kernel reserves some memory for itself, i.e. for launching the oom_killer. [131072050020] |(What use would an oom_killer be if it fails to load due to lack of memory?)
[131072060010] |One way to avoid this would be to turn off the heuristic overcommit handling and set it to not overcommit: set the sysctl vm.overcommit_memory=2, and then lower vm.overcommit_ratio. [131072060020] |Read up on this in the kernel docs. [131072060030] |You can also target specific PIDs for preferential treatment by the OOM by modifying /proc/$PID/oom_adj. [131072070010] |The kernel does allocate a minimal amount of free space for itself. [131072070020] |You can see this value with: [131072070030] |This value depends on the amount of RAM (512MB in the case above); you can try to increase it, but I don't think this will solve your problem (rather, it will increase the chance of getting OOM'd sooner). [131072070040] |The OOM killer should have enough free memory to kill applications, else it would miss the purpose of having one (like chris already pointed out). [131072070050] |Edit: Just as a side note, I don't think it's the best way to solve a problem concerning user-space programs just by modifying kernel parameters (OOM values). [131072070060] |The kernel has the best knowledge of what's going on and how to handle certain situations. [131072070070] |Rather than playing with those values, try to fix the memory problems the user-space programs (Xorg, browser) generate. [131072070080] |Also, see the comment in the mm/oom_kill.c source file; not even the kernel developers think that the OOM killer should have a lot of work to do in a well-configured environment. [131072080010] |Duplicating a Linux installation (Yum-based) [131072080020] |Given an installation based on Yum (specifically in my case, a Scientific Linux 5.1 x86_64 installation), how would I duplicate the installed programs and utilities to a new machine based on Fedora Core x86_64? [131072080030] |The hardware is very similar but not identical, and there's the obvious difference that SL5 is based on EL, not on Fedora; I'm largely aiming to duplicate the user experience from the original box (SL) to the new box (FC). [131072090010] |You can try Kickstart, or you may want to set up a PXE install/boot server for multiple distros. [131072090020] |Or if some of your machines are diskless you can try the LTSP method (this is what is generally called a thin client, IIRC); also see here [131072090030] |EDIT: If that's the case see this [131072100010] |You can create a list of the installed software with: [131072100020] |Since they are based on different distros I am not sure how you would do the install. [131072100030] |If I was copying it to a fresh install of the same distro I would run this as root: [131072110010] |Whoa whoa whoa [131072110020] |First, you have an RHEL-based source (Scientific Linux). [131072110030] |You can't just go from there to a Fedora-based system without giving up something (namely reliability). [131072110040] |First question: Why do you want to go with Fedora? [131072110050] |Second question: If this is for a server, ask yourself the first question again... are you sure the answer to that question isn't "well, maybe I can use Scientific Linux or CentOS"? [131072110060] |Assuming that this machine isn't going to be used for anything important, you can get a list of the installed programs by running 'yum list installed' and making sure the same packages are installed (although you probably won't get the same versions. [131072110070] |Fedora is much more cavalier about things like that).
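A hedged sketch of the package-list approach from the last two answers (the original code blocks were lost; this uses rpm directly rather than parsing yum's output, and it ignores the version skew between SL and Fedora that the answers warn about):

    # on the source (Scientific Linux) box: one package name per line
    rpm -qa --qf '%{NAME}\n' | sort -u > installed-packages.txt

    # on the target (Fedora) box, as root
    yum -y install $(cat installed-packages.txt)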
[131072120010] |Question about how mkinitrd adds kernel modules to the initrd [131072120020] |When creating an initrd using mkinitrd (CentOS 5.5), the kernel modules it adds to the initrd get modified in the process. [131072120030] |For example, the initrd's /lib/sata_via.ko is not binary identical to /lib/modules/2.6.18-194.32.1.el5/kernel/drivers/ata/sata_via.ko. [131072120040] |I am just curious as to what happens when mkinitrd includes a kernel module - does it link in dependencies, or what is it that makes the module change? [131072130010] |You have a /lib/sata_via.ko in your initrd? [131072130020] |Is (or was) one of your file systems (e.g. / = "root") on a SATA drive that would need that driver? [131072130030] |Does an entry for it appear in /etc/modules or /etc/mkinitrd/modules? [131072130040] |On my Ubuntu system, the module is in the same location inside the initrd image, e.g. /lib/modules//drivers/ata/sata_via.ko. [131072130050] |What does file say? [131072130060] |What does strings | grep '\ say? [131072130070] |Maybe it's from a different driver or a different kernel version? [131072130080] |Obviously you could use ls -l or du to get an idea if /lib/sata_via.ko is larger, and run nm -D against both files to see if there is any difference in symbols (e.g. using diff). [131072130090] |The whole process should be documented in man mkinitrd; in particular, it should say what scripts your system runs, perhaps something in /usr/share/initrd-tools/scripts or /etc/mkinitrd/scripts? [131072140010] |CentOS 5.5 Install Customization [131072140020] |I'm having a frustrating time with customizing my initial CentOS 5.5 installation. [131072140030] |I want to have a specific set of the packages installed (e.g. [131072140040] |I want Java and httpd but not OpenOffice or NewsServer) but each time I carefully go through all the customization screens on installation, it seems to somewhat randomly select and deselect other options. [131072140050] |I'm trying to write up a repeatable step-by-step doc but something isn't right. [131072140060] |I've tried picking just "Desktop - Gnome" or "Server GUI" or even none of them. [131072140070] |Then I choose the "Customize now" option and go through each of the packages that I want. [131072140080] |What's odd is that what comes out after the reboot is different each time. [131072140090] |Does it matter which default package group I choose if I'm going to customize the lists? [131072140100] |Another oddity happens if I try to customize after the fact. [131072140110] |For instance, if I try to just remove OpenOffice, it removes Gnome as a side effect. [131072140120] |I didn't uncheck the Gnome box, but it appears that since OpenOffice is dependent upon Gnome, it gets removed as well. [131072140130] |This implies that I need to do a minimal install and build it up as needed. [131072140140] |Am I missing some core metaphor that the installer is using? [131072140150] |I'm using the exact same DVD image each time, so I'm confused why I'm having such different results. [131072140160] |Also I'm using the graphical installation mode. [131072140170] |Ideally I'd be able to create a kickstart config from a good install. [131072140180] |Any pointers would help. [131072150010] |I had better responses over on Server Fault: [131072150020] |http://serverfault.com/questions/233036/centos-5-5-install-customization [131072160010] |Does LVM impact performance?
[131072160020] |I have to migrate a few servers to Linux, and one important aspect that I need to evaluate is that my new host system must have elastic storage capacity. [131072160030] |Naturally, doing some basic research, I came across LVM. [131072160040] |My question is: is there any performance penalty for using LVM? [131072160050] |If so, how can I measure it? [131072160060] |What I am considering right now is to have Linux as a host OS with LVM and virtualized Linux boxes running on top of it (should I add LVM on the guest OS as well?). [131072170010] |LVM, like everything else, is a mixed blessing. [131072170020] |With respect to performance, LVM will hinder you a little bit because it is another layer of abstraction that has to be worked out before bits hit (or can be read from) the disk. [131072170030] |In most situations, this performance hit will be practically unmeasurable. [131072170040] |The advantages of LVM include the fact that you can add more storage to existing filesystems without having to move data around. [131072170050] |Most people like it for this advantage. [131072170060] |One disadvantage of LVM used in this manner is that if your additional storage spans disks (i.e. involves more than one disk) you increase the likelihood that a disk failure will cost you data. [131072170070] |If your filesystem spans two disks, and either of them fails, you are probably lost. [131072170080] |For most people, this is an acceptable risk due to space-vs-cost reasons (i.e. if this is really important, there will be a budget to do it correctly), and because, as they say, backups are good, right? [131072170090] |For me, the single reason not to use LVM is that disaster recovery is not (or at least, was not) well defined. [131072170100] |A disk with LVM volumes that had a scrambled OS on it could not trivially be attached to another computer and the data recovered from it; many of the instructions for recovering LVM volumes seemed to include steps like go back in time and run vgcfgbackup, then copy the resulting /etc/lvmconf file to the system hosting your hosed volume. [131072170110] |Hopefully things have changed in the three or four years since I last had to look at this, but personally I never use LVM for this reason. [131072170120] |That said. [131072170130] |In your case, I would presume that the VMs are going to be relatively small as compared to the host system. [131072170140] |This means to me you are more likely to want to expand storage in a VM later; this is best done by adding another virtual disk to the VM and then growing the affected VM filesystems. [131072170150] |You don't have the spanning-multiple-disks vulnerability because the virtual disks will quite likely be on the same physical device on the host system. [131072170160] |If the VMs are going to have any importance to you at all, you will be RAID'ing the host system somehow, which will reduce flexibility for growing storage later. [131072170170] |So the flexibility of LVM is probably not going to be required. [131072170180] |So I would presume you would not use LVM on the host system, but would install VMs to use LVM. [131072180010] |LVM is designed in a way that keeps it from really getting in the way very much. [131072180020] |From the userspace point of view, it looks like another layer of "virtual stuff" on top of the disk, and it seems natural to imagine that all of the I/O has to now pass through this before it gets to or from the real hardware. [131072180030] |But it's not like that.
[131072180040] |The kernel already needs to have a mapping (or several layers of mapping, actually) which connects high-level operations like "write this to a file" to the device drivers, which in turn connect to actual blocks on disk. [131072180050] |When LVM is in use, that lookup is changed, but that's all. [131072180060] |(Since it has to happen anyway, doing it a bit differently is a negligible performance hit.) [131072180070] |When it comes to actually writing the file, the bits take as direct a path to the physical media as they would otherwise. [131072180080] |There are cases where LVM can cause performance problems. [131072180090] |You want to make sure the LVM blocks are aligned properly with the underlying system, which should happen automatically with modern distributions. [131072180100] |And make sure you're not using old kernels subject to bugs like this one. [131072180110] |Oh, and using LVM snapshots degrades performance. [131072180120] |But mostly, the impact should be very small. [131072180130] |As for the last: how can you test? [131072180140] |The standard disk benchmarking tool is bonnie++. [131072180150] |Make a partition with LVM, test it, wipe that out and (in the same place, to keep other factors identical) create a plain filesystem and benchmark again. [131072180160] |They should be close to identical. [131072190010] |In general: if you add a new layer of complexity ("aka more to do"), nothing will be faster. [131072190020] |Note: you only add work and don't change the way the work is done. [131072190030] |How can you measure something? [131072190040] |Well, you create one partition with LVM and one without, then use a normal benchmark and just run it. [131072190050] |Like the folks at [131072190060] |http://www.umiacs.umd.edu/~toaster/lvm-testing/ [131072190070] |As it seems, there is only a slight impact on speed. [131072190080] |That seems to be in sync with the findings of someone else who ran a benchmark: [131072190090] |http://lists-archives.org/linux-kernel/27323152-ext4-is-faster-with-lvm-than-without-and-other-filesystem-benchmarks.html [131072190100] |But just benchmark it on your own and see if your hardware and the OS you want to use behave the same, and whether you can ignore the (maybe slight) impact of an additional layer of complexity which gives you elastic storage. [131072190110] |Should you add LVM to the guest OS: that depends on whether you need the guest OS to have elastic storage as well, doesn't it? [131072190120] |Your needs dictate what you have to deploy. [131072200010] |re-entrancy of interrupts in Linux [131072200020] |I was reading "Linux Device Drivers, 3rd edition" and don't completely understand a part describing interrupt handlers. [131072200030] |I would like to clarify: [131072200040] |
  • are the interrupt handlers in Linux nonpreemptible?
[131072200050] |  • are the interrupt handlers in Linux non-reentrant?
[131072200060] |I believe I understand the model of Top/Bottom halves quite well, and according to it the interrupts are disabled for as long as the Top Half is being executed, thus the handler can't be re-entered, am I right? [131072200070] |But what about high priority interrupts? [131072200080] |Are they supported by vanilla Linux or by specific real-time extensions only? [131072200090] |What happens if a low priority interrupt is interrupted by a high priority one? [131072210010] |Nonpreemptible means that those handlers cannot be interrupted by another interrupt. [131072210020] |For non-preemptible handlers, reentrance is a non-issue since you cannot interrupt them in the first place. [131072210030] |As to what the kernel does, just check wag's answer; I can't comment like he does on this topic. [131072220010] |The Linux kernel is reentrant (like all UNIX ones), which simply means that multiple processes can be executed by the CPU. [131072220020] |It doesn't have to wait until a disk read is handled by the deadly slow HDD controller; the CPU can process some other stuff until the disk access is finished (which will itself trigger an interrupt). [131072220030] |Generally, an interrupt can be interrupted by another interrupt (preemption); that's called 'Nested Execution'. [131072220040] |Depending on the architecture, there are still some critical functions which have to run without interruption (non-preemptive) by completely disabling interrupts. [131072220050] |On x86, these are some time-related functions (time.c, hpet.c) and some xen stuff. [131072220060] |There are only two priority levels concerning interrupts: 'enable all interrupts' or 'disable all interrupts', so I guess your "high priority interrupt" is the second one. [131072220070] |This is the only behavior the Linux kernel knows concerning interrupt priorities, and it has nothing to do with real-time extensions. [131072220080] |If an interruptible interrupt (your "low priority interrupt") gets interrupted by another interrupt ("high" or "low"), the kernel saves the execution state of the interrupted interrupt and starts to process the new interrupt. [131072220090] |This "nesting" can happen multiple times and thus can create multiple levels of interrupted interrupts. [131072220100] |Afterwards, the kernel reloads the saved state from the old interrupt and tries to finish the old one. [131072230010] |How to remove multiple lines per occurrence in a file? [131072230020] |Say I have this 857835 line file, containing stuff like this: [131072230030] |And I wish to remove all occurrences of a1 and the next line (rubbish1 and rubbish5 in this example). [131072230040] |How do I do it? [131072230050] |I've tried grep 'a1' -v -A1 to no avail, and my sed skillz are not really great :} [131072230060] |My Google-fu has not been able to help me this time, someone please help! [131072240010] |Try:
sed '/^a1$/,+1d' file
[131072240020] |This means: from /^a1$/ to the next line, delete. [131072240030] |The ^ and $ ensure you match the whole line, so a hidden a1 will not be matched. [131072250010] |The following will work on non-GNU sed (the ,+1 address syntax is a GNU extension):
sed '/^a1$/,/^/d' file
[131072250020] |"Starting at a line that reads exactly 'a1', and ending at the next line for which the beginning of the line exists (i.e. the next line), delete." [131072250030] |It's much less extensible than @asoundmove's answer, though, as deleting a different number of lines would take an entirely different script.
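To see the two sed commands above in action, here is a small worked example (the sample lines are invented, since the question's original snippet was lost in formatting):

    $ printf '%s\n' b1 a1 rubbish1 b2 a1 rubbish5 b3 > sample.txt
    $ sed '/^a1$/,+1d' sample.txt     # GNU sed
    b1
    b2
    b3
    $ sed '/^a1$/,/^/d' sample.txt    # portable: the end address /^/ matches the very next line
    b1
    b2
    b3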
[131072260010] |unable to mount smb share in fedora 14 through nautilus connect to server [131072260020] |I'm unable to mount an smb share on my university network. [131072260030] |Trying through the nautilus connect to server dialog just brings the password dialog up repeatedly. [131072260040] |I'm able to mount the share using mount.cifs as root, but it'd be great to mount it through nautilus as well. [131072260050] |More specifically, no matter how many times I try, I get the dialog telling me: [131072260060] |Password required for share courses on courses.its.carleton.edu [131072270010] |The File > Connect to Server... menu item produces the window where I complete the fields as follows: [131072270020] |I use localhost for the server after having invoked ssh at the command line: [131072270030] |Where samba.example is the name behind the firewall. [131072280010] |'fast interrupts' in Linux [131072280020] |Hello again, [131072280030] |as far as I know, Linux has 'fast interrupts', those that were requested with the SA_INTERRUPT flag; fast interrupts are executed with all other interrupts disabled on the current CPU. [131072280040] |But how does this differ from the normal interrupt handler behavior (where other interrupts remain enabled)? [131072290010] |There is a good write-up here: [131072290020] |Older versions of the Linux kernel took great pains to distinguish between "fast" and "slow" interrupts. [131072290030] |Fast interrupts were those that could be handled very quickly, whereas handling slow interrupts took significantly longer. [131072290040] |Slow interrupts could be sufficiently demanding of the processor, and it was worthwhile to reenable interrupts while they were being handled. [131072290050] |Otherwise, tasks requiring quick attention could be delayed for too long. [131072290060] |In modern kernels, most of the differences between fast and slow interrupts have disappeared. [131072290070] |There remains only one: fast interrupts (those that were requested with the SA_INTERRUPT flag) are executed with all other interrupts disabled on the current processor. [131072290080] |Note that other processors can still handle interrupts, although you will never see two processors handling the same IRQ at the same time. [131072290090] |So, which type of interrupt should your driver use? [131072290100] |On modern systems, SA_INTERRUPT is intended only for use in a few, specific situations such as timer interrupts. [131072290110] |Unless you have a strong reason to run your interrupt handler with other interrupts disabled, you should not use SA_INTERRUPT. [131072290120] |So the only difference is the one that you mentioned: fast interrupt handlers execute with all other interrupt handlers disabled, for faster performance. [131072300010] |As of today, you can mostly forget about the SA_INTERRUPT flag. [131072300020] |Between 2.6.18 and 2.6.24 it was just a migration helper for the new IRQF_DISABLED flag. [131072300030] |2.6.24 removed all SA_* flags and replaced them with IRQF_* flags. [131072300040] |2.6.35 marked this "new" flag as deprecated. [131072300050] |If you have a kernel before 2.6.18, you probably won't use it (see Justin's answer). [131072300060] |Today's usage of IRQF_DISABLED differs among the architectures; x86 still only uses it for time-critical functions (time.c, hpet.c) and some xen stuff. [131072300070] |Concerning the difference: a normal interrupt can be interrupted by another interrupt (preemption); a "fast" one, on the other hand, cannot.
[131072310010] |ssh public keys instead of SASL for authenticated SMTP sending? [131072310020] |I'm currently using postfix and SASL on my personal server for authenticated SMTP. [131072310030] |The server is purely for my personal use and my personal domains, so I'd much rather have something simpler based on ssh public keys. [131072310040] |Does any such solution exist? [131072320010] |Nothing prevents a networked application from using ssh keys for authentication/encryption, except that the application has to be written to support this (e.g. by using libssh). [131072320020] |You don't say what mail user agent you use, nor what SMTP server it connects to, but it is not very likely they support ssh natively. [131072320030] |But of course, you can use a normal ssh connection to your server to make a tunnel for SMTP sessions. This authenticates the users to your machine, not to your SMTP server, which may not be what you want. [131072330010] |While it isn't SSH's pubkey authentication (which is something that only exists in the SSH protocol, not SMTP), you could set up TLS client certificates. [131072330020] |This will require a valid SSL certificate on the client side. [131072330030] |Also, if you must use SSH's pubkeys, you could simply allow all mail connections from localhost on your personal SMTP server, and set up an SSH tunnel to port 25 on the SMTP server. [131072340010] |SMTP servers usually don't check the SSL certificates when connecting, but do use them to encrypt the channel. [131072340020] |This makes them easy to configure with self-signed certificates, or certificates from a private authority. [131072340030] |(Self-signed certificates are equivalent to ssh certificates.) [131072340040] |I use tinyCA to create my own certificate authority. [131072340050] |You can use key sizes up to 4096. [131072340060] |You should use a size of at least 2048. [131072340070] |MUAs (Mail User Agents: Thunderbird, Outlook, etc.) work well with self-signed certificates. [131072340080] |You do need to accept them the first time. [131072340090] |I suggest using the submission port with startTLS and authentication. [131072340100] |This will give you a secure authenticated channel. [131072340110] |I use Dovecot IMAP with startTLS using self-signed certificates for reading email. [131072340120] |Using IMAP gives extra options like using a WebMail interface in addition to the one or more clients. [131072350010] |Configure your mail server to allow unauthenticated relay from localhost and set up an ssh tunnel for sending mail:
ssh -N -L 8587:localhost:587 yourserver
[131072350020] |This will forward the local port 8587 through ssh to yourserver on yourserver's port 587. [131072350030] |Then configure your mail client to use localhost port 8587. [131072350040] |Although I would still encourage you to leave SASL authentication on. [131072360010] |What happens in the Top half and Bottom Half processing of Interrupts? [131072360020] |I would like to know more about Top Half and Bottom Half processing in the context of interrupts. [131072360030] |Could someone explain to me exactly what happens in each? [131072370010] |Chapter 6 of "Linux Kernel Development" by Robert Love explains it, as do these free web resources: [131072370020] |
  • Linux Kernel Module Development Guide
[131072370030] |  • linuxdriver.co.il
[131072370040] |  • Linux Device Drivers
[131072370050] |Basically, the top half's job is to run, store any state needed, arrange for the bottom half to be called, then return as quickly as possible. [131072370060] |The bottom half does most of the work. [131072380010] |eth0 r8169 down on wake up from standby [131072380020] |Hi, [131072380030] |On waking up after standby the network (eth0) remains down. [131072380040] |The eth0 card is an r8169. [131072380050] |I have tried: [131072380060] |
  • ifconfig eth0 up: doesn't work.
[131072380070] |  • ifconfig eth0 up; dhcpcd eth0: worked, but how do I configure my static IP with this? My proxy is IP-bound.
[131072380080] |  • This works: [131072380090] |this command gets the network up with the conventional network manager: modprobe -r r8169; modprobe r8169; service network-manager restart [131072380100] |But is there any way to automate this or modify the acpi scripts so that no modprobe is required in the first place? [131072380110] |

    Config details:

[131072380120] |OS: Debian squeeze (6.0) [131072380130] |ethtool eth0 output: [131072380140] |File /etc/network/interfaces [131072390010] |Installation date of Ubuntu [131072390020] |How can I determine when Ubuntu was installed on my computer? [131072390030] |There was a different question posted here, but none of its answers is helpful. [131072400010] |Use last. [131072400020] |It helped me find the installation date on Fedora 14. [131072400030] |The last line, stating wtmp begins Tue Nov 9 22:35:12 2010, gives the installation date. [131072410010] |As I found here, sudo grep ubiquity /var/log/installer/syslog | less should work for Ubuntu. [131072410020] |last works for Fedora. [131072420010] |Can We Have Different Wallpapers For Each Workspace under Gnome? [131072420020] |I was wondering whether we can have different wallpapers in each workspace of a Linux distro using Gnome. [131072420030] |Googled it. [131072420040] |Solutions I found require desktop effects to be enabled. [131072420050] |My laptop can't take that much load. [131072420060] |So I wanted to know if there's a way to do so without enabling any effects. [131072420070] |[note] My distro is Fedora 14. [131072430010] |The short answer is no, not without applying patches. [131072430020] |But you could use a different window manager / desktop environment. [131072430030] |Enlightenment, for example, supports this feature. [131072440010] |Is it possible to change the spacing between files using ls? [131072440020] |I'm fairly new to unix and I was wondering if there's some attribute of ls that I can use to change the spacing between the files. [131072440030] |Currently I've aliased my ls to output ls -G, so that I have colored folders. [131072440040] |However, the downside of that is that the files/folders show up a little squooshed/close together. [131072450010] |You might be able to fix this by either running ls -G -T1 (or --tabsize=1 instead of -T1), or setting TABSIZE=1 in your ~/.bashrc. [131072460010] |I don't think ls -G does spacing any differently than ls. [131072460020] |The width depends on how long the file names are in each directory. [131072460030] |Try running /bin/ls and /bin/ls -G in the same directory, and you should notice the spacing is the same. [131072460040] |Then change to a different directory and try it again. [131072460050] |The other possibility is that you are seeing the difference between columns (-C) and across (-x) mode. [131072460060] |Try running ls -C and ls -x and see which one you prefer. [131072460070] |Then make it an alias. [131072470010] |Why would someone use joe? [131072470020] |I've seen some skilled unix/linux users use joe instead of vi(m) or nano. [131072470030] |Why would they prefer using it over the provided alternatives? [131072480010] |It's easier to learn than Vi, faster to start than Emacs, and more powerful than Pico/Nano (e.g. it has ctags support for programming). [131072480020] |But it's unlikely to be installed everywhere, so you should still know the basics of Vi and Emacs. [131072490010] |It uses WordStar key bindings by default. [131072490020] |This was a common word processor in the early 80s, and I even used it in the early 90s. [131072490030] |When I first got into Linux, I looked around for an editor that made sense to me, and hey, there it was. [131072490040] |I imagine that some other now-skilled Unix/Linux users followed the same path, because Linux arrived just at the end of WordStar's effective life. [131072490050] |So, one of the reasons is simply "timing".
[131072490060] |Modern versions have syntax highlighting and other fancy features, so I haven't bothered to switch away. [131072490070] |(I know how to use vim for editing config files, though. [131072490080] |That's kind of a mandatory skill.) [131072500010] |How can I (and should I) use my Linux file server as a Time Machine backup server for my Macs? [131072500020] |I have two Macs that I'd like to start backing up using Time Machine, but all of my storage is attached to my Linux file server. [131072500030] |How can I use my Linux file server (which happens to be running Ubuntu Karmic) as a custom Time Capsule replacement, and have my Macs (running 10.6) automatically back up to it using Time Machine? [131072500040] |And lastly, is this wise? [131072500050] |Is there any inherent risk in doing this that compromises the whole point of the backup? [131072510010] |There are a few hack-ish options out there; see here and here. [131072510020] |But I certainly wouldn't do it. [131072510030] |This is a hack; it's not supported by Apple in any way, and there is no guarantee that the next OS X update won't break it. If it does, you're stuck with your backups in a network share that is pretty much useless at that point. Perhaps if you only need the initial backup you'll make do, but remember that Time Machine backups are incremental; you are very likely not to be able to restore the latest version of your data. [131072510040] |With some bash $voodoo you might be able to pull it off (getting the initial backup, looking at time stamps, merging...) [131072510050] |From my point of view you have two options: either stick with the Apple-supported solution, be that local drives for Time Machine or investing in a Time Capsule, or find another way to back up your two Macs to your Linux server, avoiding Time Machine altogether. [131072510060] |And on that last note: I know a lot of people who swear by SuperDuper, and for many years I've used iBackup, which is in reality a glorified GUI for rsync, and that gives me some comfort. [131072520010] |Totally unsupported (you gotta pay the Apple tax otherwise), but perfectly doable: [131072520020] |http://www.kremalicious.com/2008/06/ubuntu-as-mac-file-server-and-time-machine-volume/ [131072530010] |Keep in mind that (AFAIK) Time Machine relies on directory hardlinks. [131072540010] |installed java, but java -version not showing new java [131072540020] |I just installed jdk1.6.0. I have tried multiple ways to set the PATH variable and make java 1.6 my new running version. [131072540030] |And nothing is working. [131072540040] |Running a CentOS 5 virtual machine. [131072540050] |alternatives --config java returns the following: [131072540060] |java -version returns the following: [131072540070] |echo $PATH returns the following: [131072540080] |1.6.0 is installed in the following location: /usr/local/testing/jdk1.6.0_23/bin [131072550010] |I figured this out. [131072550020] |I just prepended the new java path to the beginning of my PATH variable instead of appending it to the end. [131072550030] |That worked. [131072550040] |Now I am going to set it in my script so it is done automatically. [131072560010] |Try removing /opt/j2sdk/bin and /usr/local/testing/jdk1.6.0_23/bin from your PATH. [131072560020] |When you ran alternatives --config, it should have changed /usr/bin/java to point to the 1.6.0 version. [131072560030] |If you add them to PATH and are using alternatives, you will have two places to configure things, and it will get confusing.
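To make that fix permanent, here is a minimal sketch, assuming the JDK location given in the question (the JAVA_HOME variable is just a common convention, not something the poster mentioned):

    # Append to ~/.bash_profile (or the script mentioned above):
    export JAVA_HOME=/usr/local/testing/jdk1.6.0_23
    export PATH="$JAVA_HOME/bin:$PATH"    # prepend, so this java is found before /usr/bin/java

    # Verify which binary wins:
    which java
    java -version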
[131072570010] |Copy/rename multiple files using regular expression (shell script) [131072570020] |I have a collection of files matching a pattern such as 'assignment02.cc', 'assignment02.h', 'assignment02.txt', etc. [131072570030] |I would like to copy/rename these files into 'assignment03.cc', 'assignment03.h', 'assignment03.txt', and so on. [131072570040] |I suspect this should be straightforward using a shell script and sed. [131072570050] |So far I have: [131072570060] |Unfortunately the way I am invoking sed always returns the input string $name and doesn't actually do the regex find/replace. [131072570070] |Note: Looking on the internet there is a rename command which has this functionality, but that isn't available on my MacBook. [131072580010] |One way you could do it is: [131072590010] |Using sed here introduces more complexity than it's worth. [131072590020] |Use the shell's built-in text manipulation features:
for name in assignment02.*; do cp "$name" "assignment03${name#assignment02}"; done
[131072590030] |For example, ${name#assignment02} strips the prefix assignment02 from $name. [131072590040] |The double quotes are not necessary if you are sure the file names contain no shell special characters. [131072590050] |If you have zsh available, its zmv function is helpful for these kinds of renamings. [131072600010] |You're already thinking too complex. [131072600020] |'rename' replaces the first arg ('02') with the second arg ('03') in the names of all files given as the third argument or after (*02*). [131072610010] |A different approach: [131072610020] |In the long run: try to get 'rename'. [131072620010] |Scrolling through ls output without a mouse [131072620020] |In some directories there are quite a few files. [131072620030] |When I use ls the output is more than my terminal can handle. [131072620040] |Typically I just use one of the following: [131072620050] |depending on my mood. [131072620060] |However, I lose the color coding and I only get a single file/directory per line. [131072620070] |Is there a way to scroll back through my terminal output after I use ls, without having to use the mouse? [131072620080] |I checked the man page for ls and I didn't see anything that would allow for this, but I may have missed something. [131072630010] |ls doesn't control scrolling through the terminal output, your terminal does; by the time you're doing that, ls has already terminated. [131072630020] |Your terminal probably supports a hotkey to handle it; Page Up and Shift+Page Up are common choices. [131072640010] |You want ls -C --color=yes | less -R. -C forces ls into multi-column mode even when it's being piped, --color=yes forces ls to always output in color, even when being piped, and the -R argument to less forces it to interpret raw terminal escape codes. [131072640020] |In the general case, you might also consider GNU Screen if your terminal isn't configured to support hotkeys like Michael Mrozek mentioned. [131072650010] |How can I open port 21 on a Linux VM? [131072650020] |I need to open port 21 on a Linux (CentOS 5) virtual machine I have. [131072650030] |I have tried several Google solutions, but none are working. [131072650040] |I was wondering if someone could tell me how to do this.
[131072650050] |Below is the output of netstat -tulpn: [131072650060] |
tcp   0   0 127.0.0.1:2208            0.0.0.0:*   LISTEN   3576/hpiod
tcp   0   0 0.0.0.0:611               0.0.0.0:*   LISTEN   3397/rpc.statd
tcp   0   0 0.0.0.0:111               0.0.0.0:*   LISTEN   3365/portmap
tcp   0   0 127.0.0.1:631             0.0.0.0:*   LISTEN   3020/cupsd
tcp   0   0 127.0.0.1:25              0.0.0.0:*   LISTEN   3629/sendmail: acce
tcp   0   0 127.0.0.1:2207            0.0.0.0:*   LISTEN   3582/python
tcp   0   0 :::22                     :::*        LISTEN   3595/sshd
udp   0   0 0.0.0.0:68                0.0.0.0:*            3278/dhclient
udp   0   0 0.0.0.0:605               0.0.0.0:*            3397/rpc.statd
udp   0   0 0.0.0.0:608               0.0.0.0:*            3397/rpc.statd
udp   0   0 0.0.0.0:5353              0.0.0.0:*            3729/avahi-daemon:
udp   0   0 0.0.0.0:111               0.0.0.0:*            3365/portmap
udp   0   0 0.0.0.0:57333             0.0.0.0:*            3729/avahi-daemon:
udp   0   0 0.0.0.0:631               0.0.0.0:*            3020/cupsd
udp   0   0 192.168.201.90:123        0.0.0.0:*            3611/ntpd
udp   0   0 127.0.0.1:123             0.0.0.0:*            3611/ntpd
udp   0   0 0.0.0.0:123               0.0.0.0:*            3611/ntpd
udp   0   0 :::5353                   :::*                 3729/avahi-daemon:
udp   0   0 :::52217                  :::*                 3729/avahi-daemon:
udp   0   0 fe80::20c:29ff:fe66:123   :::*                 3611/ntpd
udp   0   0 ::1:123                   :::*                 3611/ntpd
udp   0   0 :::123                    :::*                 3611/ntpd
[131072650070] |And here is the output of iptables -L -n: [131072650080] |
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
[131072650090] |
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
[131072650100] |
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[131072660010] |I figured it out. [131072660020] |I don't have an FTP server running on the machine I am trying to connect to. [131072660030] |Brain-dead mistake. [131072670010] |What is the Linux equivalent of DOS "dir /s /b filename"? [131072670020] |List all files/dirs in or below the current directory that match 'filename'. [131072680010] |You can do this with find. [131072690010] |The direct equivalent is:
find . -name filename
[131072690020] |If you only want to list files called filename, and not directories, do this:
find . -name filename -type f
[131072690030] |If you want to use wildcards, you need to put quotes around them, e.g.
find . -name 'file*'
[131072690040] |otherwise the shell will expand it. [131072690050] |As others have pointed out, you can also do:
find . | grep filename
[131072690060] |grep will print lines based on regular expressions, which are more powerful than wildcards, but have a different syntax. [131072690070] |See man find and man grep for more details. [131072700010] |Some shells allow ls **/filename, which is quite convenient. [131072710010] |Restarting X in CentOS 5 [131072710020] |I went to a CentOS terminal, and someone was logged in doing work. [131072710030] |I hit Ctrl+Alt+F2 to try to get to my own session so I could check something. [131072710040] |When I hit Ctrl+Alt+F1 to get back, X did not come back up and gave a bunch of error messages. [131072710050] |Is this a bug? [131072710060] |Did that other user just lose all the work they had running? [131072710070] |It seems like if I had to kill all X and do a startx again, he would lose everything running in the terminals and other GUI apps, right? [131072720010] |I admittedly know nothing about CentOS, but usually F1 through F6 are reserved for TTYs; X sessions don't start until F7. Hitting Ctrl+Alt+F7 should get you back to the first X session. [131072730010] |Can less invoke vim instead of the default vi when I hit the 'v' key? [131072730020] |I want to launch vim instead of the default vi editor when I hit v in less. [131072730030] |Are there any settings I can modify to allow me to do this? [131072740010] |From man less:
   v      Invokes an editor to edit the current file being viewed. The editor is taken from the environment variable VISUAL if defined, or EDITOR if VISUAL is not defined.
[131072740020] |Simply set the standard EDITOR environment variable according to your wishes, e.g. export EDITOR=vim in ~/.bashrc or something like that.
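As a concrete sketch of that advice (note that less consults VISUAL before EDITOR, so setting both is the safe bet):

    # In ~/.bashrc: make less's 'v' command launch vim
    export VISUAL=vim
    export EDITOR=vim

After re-sourcing the file (source ~/.bashrc), pressing v inside less should open the current file in vim.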
[131072750010] |You can also make an alias (alias v='vim') and save it in your ~/.bashrc. [131072760010] |Can we get compiler information from an ELF binary? [131072760020] |Is there some chance to know how a binary was built, under Linux? (and/or other Unix) [131072760030] |Compiler, version, time, flags etc... [131072760040] |I looked at readelf and couldn't find much, but there might be other ways of analyzing the binary code/sections etc... [131072760050] |Anything you know how to extract? [131072770010] |There isn't a universal way, but you can make an educated guess by looking for things only done by one compiler. [131072770020] |GCC is the easiest; it writes a .comment section that contains the GCC version string (the same string you get if you run gcc --version). [131072770030] |I don't know if there's a way to display it with readelf, but with objdump it's:
objdump -s --section .comment ./a.out
[131072770040] |I just realized I ignored the rest of your question. [131072770050] |Flags aren't generally saved anywhere; they would be in a comment section most likely, but I've never seen that done. [131072770060] |There's a spot in the COFF header for a timestamp, but there's no equivalent in ELF, so I don't think the compile time is available either. [131072780010] |You can try using the strings command. [131072780020] |It will produce a lot of text output; by checking it you might guess the compiler. [131072780030] |
pubuntu@pubuntu:~$ strings -a a.out | grep -i gcc
GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3
[131072780040] |Here I know it's compiled with GCC, but you can always redirect the strings output to a file and examine it. [131072780050] |There is one very good utility called PEiD for Windows, but I can't find any alternative to it on Linux. [131072790010] |rc control user [131072790020] |Hi, [131072790030] |I am playing around with rc scripts. [131072790040] |I am starting a daemon from rc.local. [131072790050] |I would like to know how to start the process under a specific user instead of root. [131072800010] |What daemon? [131072800020] |Most daemons come with a command-line or config option to drop privileges. [131072800030] |But if you're looking for a generic way, try:
su - someuser -c '/path/to/daemon'
[131072810010] |I can't ssh on localhost at a certain port on os x [131072810020] |Here is the basic information: [131072810030] |It's like that because I'm using MacPorts and it did install there. [131072810040] |I did the sudo port load openssh [131072810050] |When doing netstat -an | grep LISTEN after reboot, [131072810060] |I have this: [131072810070] |then here are the results of nmap: [131072810080] |Now what happens when I try to ssh on localhost? [131072810090] |When specifying port 2222: [131072810100] |Success! [131072810110] |The reason: I found it in the sshd_config file in /opt/local/etc/: port 2222. Here is the file: [131072810120] |So I decided to change the port in that file to 22 [131072810130] |and relaunched the service with unload/load as follows: [131072810140] |Well, I'm feeling lucky and I try ssh localhost [131072810150] |No such thing as luck, I suppose. [131072810160] |Here is a -vv of the command: [131072810170] |I feel like a noob. :/ [131072810180] |What do you think? [131072820010] |OS X comes with sshd already. [131072820020] |It's running if you enable "Remote Login" in System Preferences under Sharing. [131072820030] |If all you want to do is make it listen on a non-default port, the trick is as follows: [131072820040] |
  • Open /System/Library/LaunchDaemons/ssh.plist in your favorite text editor.
[131072820050] |  • Find the SockServiceName key.
[131072820060] |  • Change the string value to something like ssh-alt, then save the plist file.
[131072820070] |  • Add an entry for ssh-alt to the /etc/services file.
[131072820080] |  • Go into the Sharing preference pane and toggle the "Remote Login" checkbox off and back on. [131072820090] |You'll find that the native sshd is now listening on the other port.
[131072820100] |You'd think you could avoid all that by editing /etc/sshd_config, but you'd be wrong. [131072820110] |The native sshd pays attention to the plist file only. [131072830010] |Choosing a reliable Linux distribution [131072830020] |My question is about choosing a Linux distribution that doesn't cause hardware errors. [131072830030] |Here is my problem: I have a Western Digital 500 hard drive, and the first Linux I used on it was Ubuntu 9.04 and everything was OK; then I moved to Ubuntu 10.04 LTS. [131072830040] |After a while the Diagnostic Tool showed me that the HD had bad sectors. [131072830050] |The store where I bought my computer replaced the hard drive. [131072830060] |I installed Ubuntu 10.04, and after 4-5 months the Diagnostic Tool showed me bad sectors again. [131072830070] |My HD is still under warranty and they will again replace it, but I must replace the Linux distribution; this is too much for my nerves. [131072830080] |Can someone explain which distro I should move to? [131072830090] |My experience with Linux is: openSuse 10.02 (some problems with flash drivers), Ubuntu 9.04 and Ubuntu 10.04. [131072840010] |The hardware problems could not have been caused by your Linux distribution. [131072840020] |It seems that you're just very unlucky with hard drives, or you are mishandling them (vibration or mechanical stress could damage the hard drive surface). [131072850010] |I seriously doubt that any Linux distribution, let alone a popular one that is used by millions of people, can cause bad sectors on the hard disk. [131072850020] |Moreover, the same OS components, like the kernel and file system modules, are shared by many Linux distributions. [131072850030] |It can be just bad coincidence or there might be something else wrong with your system - I had problems with hard disks caused by a faulty motherboard before. [131072860010] |It's unlikely that your OS is causing the failures. [131072860020] |As disks increase in size and data density, the likelihood of encountering a bad sector is increasing. [131072860030] |I would expect a new disk to see bad sectors as time went on. [131072860040] |More likely, any other distros you've used that didn't report failures were simply not reporting them, or didn't even detect them (silent failures). [131072860050] |Also, I'd check with your warranty repair -- you might be getting refurbished units that have filled their bad sector defect tables already. [131072860060] |Read up on using smartctl to diagnose the problems on your disk. [131072860070] |(Although research has shown that not all disk problems are reported by SMART, it's still a good place to start.) [131072870010] |Boot Linux from UEFI BIOS [131072870020] |I'm porting a UEFI BIOS. [131072870030] |I'd like to download a Linux image (bzImage) to system memory by TFTP in my UEFI shell, and then boot the OS directly. [131072870040] |I know we generally need another bootloader to do that. [131072870050] |But is it possible to boot Linux in UEFI BIOS? [131072870060] |And how? [131072880010] |As far as I know, a UEFI firmware (not BIOS, that's something else) can only load UEFI applications corresponding to the EFI firmware architecture. [131072880020] |So you can't directly load a Linux kernel, but you should be able to load a UEFI bootloader which will then load the Linux kernel into memory and jump to it. [131072880030] |I know of GRUB2 and ELILO, which support UEFI; you may want to check them out. [131072890010] |Set visible directories for SFTP access?
[131072890020] |I am setting up SFTP access to one of my machines running Linux with the Dropbear SSH server. [131072890030] |When I SFTP onto the machine remotely, I can see the entire filesystem on it, even if I might not have write access. [131072890040] |How do I control what directories a user can see when connecting to my machine via SFTP? [131072890050] |For example, what if I only want to make one directory, e.g. /ftp/, visible and accessible? [131072890060] |Thanks. [131072900010] |I believe you'll need to run your Dropbear ssh server inside a chroot'd jail if you want to restrict it to certain directories. [131072900020] |If you were using a recent OpenSSH, I'd suggest using the ChrootDirectory setting in your sshd_config. [131072900030] |It doesn't appear as though Dropbear has a similar parameter, so you'll have to do it manually. [131072910010] |cdrdao is installed, but Brasero doesn't think so [131072910020] |When I try to copy a CD to an image, Brasero tells me that cdrdao isn't installed. [131072910030] |I have reinstalled both cdrdao and Brasero, but the error is still there. [131072910040] |[system] Ubuntu Lucid (10.04) [131072920010] |How to 'burn' a subtitle track onto an mp4 video file [131072920020] |I would like to make a subtitle file part of an mp4 video file, so that I don't have to deal with two separate files. [131072920030] |I imagine two ways: [131072920040] |
  • Make the subtitle an intrinsic part of the video. [131072920050] |This will require video re-encoding. [131072920060] |I don't see any advantage of this approach.
[131072920070] |  • Make the subtitle a separate stream, but still embedded in the same video file. [131072920080] |This is far more preferable, especially because I can disable it (unlike the other approach), or even play with the font type/size.
[131072920090] |How do I do things the 2nd way? [131072920100] |It would be cool to see what the command for the 1st way looks like regardless. [131072920110] |(2 questions in 1 :) [131072930010] |I finally found a way to do it!... [131072930020] |I gave up on the ffmpeg option... [131072930030] |The documentation says it's possible, but I can't figure it out, and it seems that no one has posted a working example on the net (but there are lots of questions about it)... [131072930040] |So, I got to thinking about what sort of parameters the Windows MP4Box was using for its subtitles... (the slightest hint can be helpful)... [131072930050] |As fate had it, in my search I stumbled on something about an mp4 tool being talked about on a Mac-oriented site... well, this got me thinking that if that program is available on the Mac, maybe it is in the Ubuntu repository, and yes, you've guessed it... [131072930060] |It is in the repos... [131072930070] |It is called 'gpac'... [131072930080] |That's the name of the package, but the name of the actual command-line program (it is a CLI) is MP4Box. [131072930090] |Here is an example of what worked for me... [131072930100] |I used a video-only and an audio-only file, but it will surely(?) work with a normal audio+video "movie"... [131072930110] |It's very late so I'll just leave it at that... [131072930120] |Note: The subtitles work in Totem, but not in SMPlayer... [131072940010] |Try something like: [131072950010] |The Matroska (mkv) container format supports text-based subtitles embedded as a separate stream in the file. [131072950020] |You could use mkvmerge to remux the file to .mkv and include the subtitles in the output, which you can enable/disable when playing the video. [131072950030] |Note that this method will NOT re-encode the video or audio; it's just putting the same data into a different container format, so it will not decrease the quality of the video.
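To make both approaches concrete, here is a hedged sketch; the file names are invented, and the exact options may vary between MP4Box and ffmpeg versions:

    # Way 2: mux the subtitles in as a separate, switchable stream
    # (MP4Box comes from the gpac package mentioned above)
    MP4Box -add movie.mp4 -add subtitles.srt:lang=eng movie-with-subs.mp4

    # The same idea with a Matroska container, per the last answer:
    mkvmerge -o movie-with-subs.mkv movie.mp4 subtitles.srt

    # Way 1: burn the subtitles into the picture (re-encodes the video);
    # assumes an ffmpeg built with libass
    ffmpeg -i movie.mp4 -vf subtitles=subtitles.srt burned-in.mp4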