[131053760010] |How to efficiently use 3D via a remote connection? [131053760020] |I have one weak PC (client) but with acceptable 3D performance, and one strong PC (server) which should be capable of running an application using OpenGL twice, i.e. once locally and once remotely for the client. [131053760030] |Currently, I ssh -X into it, but the client's console output states software rendering is used and I only get 3 frames per second (fps). [131053760040] |Actually, ssh's encryption is not necessary since this is on a LAN, but it's what I already know for remote applications... [131053760050] |So, how can the client performance be increased? [131053760060] |My ideas are [131053760070] |
  • use hardware acceleration, but whose (the server's or the client's), and how?
  • [131053760080] |
  • use something different than ssh
  • [131053760090] |I know, in full resolution and without sophisticated compression a 100 Mbit/s LAN won't allow more fps, but it's a windowed application of ca. 800x450, so theoretically up to 12 fps (at 24 bits/pixel) should be possible using uncompressed graphical data. [131053760100] |And maybe something better is possible using the client's own GPU or some smart compression. [131053760110] |-- [131053760120] |edit Turns out what I want is basically a local version of what e.g. OnLive and Gaikai offer. [131053760130] |Is there something like this for Linux (and possibly free)? [131053760140] |-- [131053760150] |edit2 VirtualGL looks like the best solution (though currently not working for me), but I wonder if it is possible to do hardware rendering on the client, too. [131053770010] |The 3D used direct rendering, which means it's ... direct to the hardware. [131053770020] |You won't be able to get any good 3D performance over the X protocol, if at all. [131053770030] |But you've got your systems set up backwards. [131053770040] |You don't need a lot of CPU to serve files. [131053770050] |Make the less powerful CPU the server, and use the more powerful one as your client (and exchange the graphics cards if you can). [131053780010] |You could check out VirtualGL; together with TurboVNC it should provide you with 20 fps @ 1280x1024 on 100 Mbit (see Wikipedia). [131053780020] |Do note that it might not work with all applications; it depends on how they use OpenGL. [131053790010] |Which filesystem to use for a partition shared with a parallel Windows installation? [131053790020] |On my new PC I want to install both Linux and Windows, each on their own rather small partition, and put the big rest of the 1 TB HDD in a further partition (plus swap). [131053790030] |Which filesystem should I use? [131053790040] |My thoughts: [131053790050] |
  • NTFS. [131053790060] |Linux has write support, but I noticed on my external HDD a huge performance drop when only a few GB (of 500 GB) were left free - suddenly a few hundred MB took half an hour to copy... [131053790070] |Any ideas why? [131053790080] |Also, no file permissions with Linux; that is not 100% necessary, but would be a bonus
  • [131053790090] |
  • FAT32. [131053790100] |Too old, and it won't support that partition size anyway
  • [131053790110] |
  • ext3. [131053790120] |Windows can read it, for example via ext2ifs, but what about good write support? [131053790130] |I'd even consider a small virtual machine with a tiny Linux installation just to provide an NFS share to its Windows host (probably qemu; distro recommendations are appreciated)
  • [131053790140] |
  • ext4. I lack the experience with it...
  • [131053790150] |It looks like NTFS is the way to go for now (just as it was two years ago), but I'd prefer a less proprietary solution... [131053800010] |If you don't mind running a VM, [131053800020] |you could use it to share your partition via Samba (simpler in Windows than NFS). [131053810010] |I use an ext3 partition with ext2ifs and it works fine for reading and writing. [131053820010] |If you run coLinux on Windows (through the andLinux distribution or otherwise), you can use it to access any filesystem that Linux supports. [131053830010] |What are the strengths and weaknesses of different Unixes? [131053830020] |In a few points, what are the strengths and weaknesses of your favourite Unix? [131053830030] |Tell us the unique features that other implementations do not offer. [131053840010] |How to set up a "data" partition shared by multiple Linux OS'es? [131053840020] |Currently I have an NTFS partition that contains shared data. [131053840030] |My rationale is, NTFS doesn't have any idea about file permissions, so I won't have any trouble using them in a multi-boot system (currently I have Gentoo and Ubuntu, and the data partition is auto-mounted on both). [131053840040] |Now I want to get rid of the NTFS thing, if possible. [131053840050] |So the question is, how can I use something like ext4 and set up the same thing? [131053840060] |Update: Sorry, I should have made it clear that I only have Linux distributions, so no problem with ext4. [131053840070] |I just want to have a partition that contains world-readable files and is automounted at boot. [131053850010] |Your choice of filesystem depends on the operating systems that will need to read and write to the filesystem. [131053850020] |Since Windows doesn't natively support EXT4, and there is no 3rd party product to allow Windows to write to EXT4, I would use NTFS or FAT32 if Windows is one of the operating systems with which you need to share access to that data. [131053860010] |NTFS does have file permissions. [131053860020] |Either you squashed them through mount options or you used consistent user mappings or you made your files world-accessible. [131053860030] |If you use a filesystem whose driver doesn't support user mappings, you have several options: [131053860040] |
  • Arrange to give corresponding users the same user IDs on all operating systems.
  • [131053860050] |
  • Make files world-accessible through an access control list (this requires a filesystem that supports ACLs; ext[234] do, but you may have to add the acl mount option in your /etc/fstab). [131053860060] |Run the following commands to make a directory tree world-accessible, and to make files created there in the future world-accessible: [131053860070] |
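A sketch of what those commands could look like, assuming GNU setfacl and a shared tree at /media/shared (the path is a placeholder):

    # Make the existing tree world-accessible (X grants execute on directories only)
    setfacl -R -m o::rX /media/shared
    # Add a matching default ACL so files created there later inherit the access
    setfacl -R -d -m o::rX /media/shared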
  • Mount the filesystem normally and provide a view of the filesystem with different ownership or permissions. [131053860080] |This is possible with bindfs, for example: [131053860090] |Or as an fstab entry: [131053860100] |NTFS has the advantage that it's straightforwardly sharable with Windows, but it is not a requirement if you don't need to share with Windows. [131053870010] |Use bindfs. [131053870020] |In short, it adds more owners to the same folder. [131053870030] |It gives you more flexibility and it's simple. http://ubuntuforums.org/showthread.php?t=1460472 [131053880010] |Xargs and rm with a * [131053880020] |I am trying to execute the following command [131053880030] |The problem is that when I add the /* part the command does not work. [131053880040] |I want to remove the directory contents (not the directory itself). [131053890010] |Commands like rm rely on the shell to expand wildcards, so you'd need to invoke a shell somewhere. [131053890020] |Perhaps [131053900010] |What about rm -r a*/*? [131053900020] |This should solve your issue. [131053910010] |(Warning: I haven't tested any of these commands. [131053910020] |There may be typos or errors. [131053910030] |Try everything with echo or ls and no sudo first.) [131053910040] |General advice: xargs is rarely the simplest way to do something. [131053910050] |Without -i (which is deprecated by the way; use the portable -I {} instead), xargs expects an input format that no standard or common tool produces. [131053910060] |With -I, xargs does have some use, though if you're piping find into it, find -exec is simpler. [131053910070] |You need a shell to expand {}/*, and that shell must act after {} has been replaced by one actual path. [131053910080] |A simple solution is to do the line-by-line processing in the shell. [131053910090] |Note the use of three patterns to capture all the files in a directory, as * omits dot files; it doesn't matter if some of the patterns don't match any file, as rm -f ignores any argument that doesn't reference an existing file. [131053910100] |Xargs is of no use here, since you have to process the files one by one anyway at some point, in order to get the shell to perform the globbing. [131053910110] |As the general technique can be useful however, here's a way to feed file names back into a shell from xargs. [131053910120] |The _ is the $0 argument to the shell (it's there in part to avoid having to stick $0 back onto any subsequent argument, and to ensure that everything works even if $1 begins with a - and so would otherwise be treated as an option to the shell). [131053910130] |There are two approaches here: either have the shell loop over its arguments, or tell xargs to pass a single argument to each shell invocation. [131053910140] |I'm assuming that ls -d a* is a toy example and that you have a more complex command that produces one file name per line. [131053910150] |Note that ls -d a* would not work there, as most ls implementations do not output nonprintable characters transparently. [131053910160] |If the command producing the file names is a find command, use -exec, as in [131053920010] |Another option is to use a shell loop instead of invoking xargs at all. [131053920020] |E.G.
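For instance, a minimal sketch of such a loop (the a* pattern comes from the question; -- protects against names beginning with a dash):

    for dir in a*; do
        rm -rf -- "$dir"/*    # remove each directory's contents, not the directory
    done

As noted above, * omits dot files; add more patterns if those need to go too.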
[131053920030] |Mostly for the sake of example, you can also combine the two techniques by putting a shell construct in the middle of a pipeline: [131053920040] |The -d option to xargs just specifies that the delimiter between arguments is a newline, which avoids potentially catastrophic problems with filenames which contain spaces (for example a file named “deadly / file”, where splitting on whitespace would hand rm the argument /) [131053930010] |What is the state of the Linux Standard Base? [131053930020] |In learning some assembly programming I have found the documents of the Linux Standard Base very useful. [131053930030] |It seems they tell me how things are supposed to be (on standards-based systems), not just how they are in the implementation I have in front of me. [131053930040] |In the Wikipedia article there are two 2005 articles linked that suggest there is contention around this standard. [131053930050] |2005 is a long time ago; what is the current view? [131053930060] |(Note: just this year the Linux Foundation certified many distributions for LSB 4.0, so they are still in the game working with some distributions. [131053930070] |Their press release of course does not mention any possible contention around it.) [131053940010] |The world has moved on. [131053940020] |Autotools and distro packagers reconcile differences in libraries and interfaces and adjust as necessary. [131053950010] |The Linux Standard Base is a set of APIs that are guaranteed to be available on an LSB-compliant installation. [131053950020] |This mostly requires installing some other free software libraries that most distributions already have available - such as a POSIX-compliant libc, C++ compiler support, Python, Perl, GTK+ and Qt. [131053950030] |All major Linux vendors today ship LSB-compliant operating systems; that includes Red Hat, Debian, Ubuntu and Novell - so I don't believe there is much contention about it. [131053950040] |Back when LSB first started, people were a bit "meh - who cares". [131053950050] |Later there was contention about which APIs to include: if Perl is to be required, then what about Python? [131053950060] |If GTK+ is required, then what about Qt developers? [131053950070] |This would have made for some pretty fancy flame wars if not for the "meh" attitude of many operating system vendors towards LSB. [131053950080] |Eventually all this was settled by the Linux Foundation being all-inclusive and supporting multiple APIs that do the same thing, and now it looks like everyone is content. [131053960010] |KDE starting without anything else (no panels, no window manager/decorator) [131053960020] |So, I've installed KDE in Arch Linux using the kde package, as well as phonon-xine from pacman. [131053960030] |Maybe it's because I had GNOME before, maybe it's because I did something wrong, but when I log in from KDM, all I get is a small white terminal in the bottom right corner. [131053960040] |Now, from this terminal, I can launch plasma-desktop, and from the actual launcher I can get a konsole session open, and then I can start KWin or compiz and KDE4-Window-Decorator and have a usable desktop environment. [131053960050] |I don't think this is expected behavior, though. [131053960060] |How do I get it to launch all this stuff properly (and anything else I'm forgetting) when I log in from KDM? [131053960070] |Also, since I've installed KDE, GNOME is all messed up - but one thing at a time, and if I like KDE enough I'll uninstall GNOME anyway.
[131053970010] |When you log in using KDM, there's a menu called "session type", which you can use to choose which window manager or desktop environment to start. [131053970020] |It sounds like in your case the "xterm session" or "fallback session" is selected. [131053970030] |If you select KDE instead, it should start up correctly. [131053980010] |How can I avoid the prompts when installing a FreeBSD port? [131053980020] |When I install a port, I am often presented with a menu screen to select configuration options. [131053980030] |If I'm going to install a really big package with lots of dependencies, that will be extremely inconvenient. [131053980040] |Is there a make flag for accepting the default answers for all such prompts? [131053990010] |Probably BATCH, described in ports(7), is what you're looking for: [131053990020] |make rmconfig removes the OPTIONS config for this port, and you can use it to remove OPTIONS which were previously saved when you configured and installed screen(1) the first time. OPTIONS are stored in a directory which is specified via PORT_DB_DIR (defaults to /var/db/ports). [131053990030] |If you use bash, BATCH can be set automatically every time you log in: [131053990040] |Hope this helps. [131054000010] |I think it's worth mentioning that you might not always want to do this. [131054000020] |I seem to remember, for instance, needing to configure emacs to add xft support. [131054000030] |If you want to bypass the prompts for a single build, [131054000040] |make -DBATCH install will work as well. [131054010010] |Wubi pros and cons [131054010020] |Do you have experience using Wubi and if so, what are the major pros and cons? [131054010030] |I am of course particularly interested in potential problems.

    Pros

    [131054020020] |
  • If a computer needs to have Linux installed temporarily, Wubi is easier to remove without leaving obvious traces. [131054020030] |Without Wubi, you need to re-enlarge the Windows partition and restore the Windows bootloader. [131054020040] |With Wubi, you just use the supplied uninstaller. [131054020050] |You can even temporarily hide the Ubuntu entry from the Windows bootloader.
  • [131054020060] |
  • For someone who wants to try out Linux but is afraid of harming Windows, Wubi is less scary.
  • [131054020070] |

    Cons

    [131054020080] |
  • Suppose a user has a computer with Wubi, has upgraded something but isn't sure what, is thousands of kilometers away, and phones you because his computer won't get past the bootloader stage. [131054020090] |What next? [131054020100] |There are far fewer resources about troubleshooting Wubi than about a traditional Grub-based dual boot.
  • [131054020110] |
  • Similarly, if you need to troubleshoot or rescue the system with a live CD, Wubi reduces your options.
  • [131054020120] |
  • Increasing the size of the Wubi partition image wasn't supported last time I looked.
  • [131054020130] |
  • Wubi induces a small performance loss. [131054020140] |(But if you're even considering Wubi, you probably don't care.)
  • [131054020150] |
  • If Windows screws up its filesystem, you lose Linux as well. [131054020160] |(Though I get the feeling NTFS is reliable these days.)
  • [131054020170] |

    My advice

[131054020180] |Only use Wubi if you have a compelling reason. [131054020190] |If you're new to Linux and aren't sure you want to keep it, you can go ahead and use Wubi. [131054020200] |But consider this a temporary installation, and don't invest in it. [131054020210] |If you find you want to start customizing things, or if you run into any technical problem, throw the Wubi installation away and start over on a proper partition. [131054020220] |It won't harm your Windows partition (in fact, it will affect your Windows installation even less than Wubi). [131054020230] |If you have a specific concealment requirement for political reasons (“we don't want no stinkin' Linux on our network”), Wubi is viable, but not necessarily hassle-free. [131054020240] |Note that for hesitant or discreet installations, an alternative in some use cases is to run Linux in a virtual machine: either coLinux (a virtualized Linux running on Windows), for example with andLinux (Ubuntu on coLinux); or a more general VM technology such as VirtualBox or VMware. [131054030010] |Is there truth to the philosophy that you should sync; sync; sync; sync? [131054030020] |When I was first introduced to Linux, working at Cisco Systems in 2000, I was taught the merits of the sync command, used to flush buffers to disk to prevent filesystem corruption / data loss. [131054030030] |I was told not only by coworkers there, but by friends in college to always run sync "a few" or "a bunch" of times, that is, maybe 5 - 10 times, instead of just once. [131054030040] |I've continued this habit ever since, but, is there any merit to this? [131054030050] |Has anyone else ever heard this? [131054030060] |And most importantly, can anyone provide good rationale / empirical evidence for/against the idea that you need to run sync more than once for it to be effective? [131054040010] |I heard it (sorry, I forget where) as typing the sync command three times (as in: S Y N C Return, wait for the prompt, repeat, repeat). [131054040020] |I also read that the origin was a particular system where it would take a couple of seconds for the disk to finish flushing its buffers, even after it had told the operating system everything was fine. [131054040030] |Typing the command twice more gave the disk enough time to settle. [131054040040] |It seems that over the years, the purpose was forgotten, and the advice was abbreviated as sync; sync; sync, which wouldn't have had the desired effect (since the disk had reported the “all clear”, the second and third syncs would complete instantly and the prompt would come back too early). [131054040050] |I have never heard of a system where multiple sync operations have any use, and I am highly skeptical any exist. [131054040060] |I consider this an urban legend. [131054040070] |On the other hand, I find it highly believable that there would be systems where you should wait a couple of seconds after sync'ing and before powering down. [131054040080] |Googling leads to a few independent concurring analyses, e.g. The Legend of sync. [131054040090] |See also Is execution of sync(8) still required before shutting down linux?. [131054050010] |Dot Matrix printer setup in Debian [131054050020] |I am using Debian, which is similar to Ubuntu. [131054050030] |There are 7 dot matrix printers, some of which are very old (example); they have worked since 1979 and are faster than many inkjets at printing text. [131054050040] |I believe that it has its own language, since sending text to the serial port (port server) prints garbage.
[131054050050] |However, I think it prints only capital English letters up to ASCII 95, and Greek for the rest up to 127 - I think Greek capitals (special chip). [131054050060] |I think that sending English capital letters prints garbage, but I am not sure and will try again. [131054050070] |The other printers are ESC/P compatible and I use a generic Epson driver provided by ghostscript. [131054050080] |However, I think that sending text via [131054050090] |prints the text as a graphic, and changing the printer's font face (courier, times roman, etc) or pitch has no effect. [131054050100] |I am wondering if there is any workaround for this? [131054050110] |In AIX they claim that the lp command prints output as text, and COBOL programs send raw text to lp printers. [131054050120] |However, in AIX they use some custom filters for the printers and have more options for dot matrix printers. [131054050130] |I would like to know if there is a way to avoid graphics mode for text and change the font face. [131054050140] |The most straightforward approach would be to use no driver and just send ESC/P from COBOL, but this requires too much work. [131054060010] |As we call `#!/bin/perl` 'shebang', what do we call `-*- perl -*-`? [131054060020] |The first line of a script may be in the format #! interpreter, so the shell will execute the script using the specified interpreter. [131054060030] |This is the so-called shebang or shbang. [131054060040] |Similarly, there can be some special comment like [131054060050] |or [131054060060] |etc. [131054060070] |Just like the shbang is used to denote which interpreter to use, these comments are used to adjust the editor's behavior. [131054060080] |What do we call these special comments? [131054060090] |Maybe shdash or shvim? [131054060100] |Are there some already defined slang terms for these? [131054060110] |And are there wiki pages showing the list of shxxx terms, and the software that follows such conventions? [131054070010] |Those are called modelines, at least in vim. [131054070020] |See :help modeline -- not exactly slang, but at least it's something to call them. [131054080010] |viewing info documentation with vim [131054080020] |I'm trying to use the ManPageView addon to view info documentation within vim. [131054080030] |Although I find ManPageView very useful for, well, viewing manpages, its info-reading functionality doesn't seem to work at all. [131054080040] |But then I almost never read info documentation, mostly because it doesn't seem worth taking the time to learn the info interface. [131054080050] |So I may be doing it wrong. [131054080060] |Anyhow: running :Man info.i as suggested by the ManPageView documentation gives me [131054080070] |I see some success if I delete the following lines, which start at line 345 in the file autoload/manpageview.vim: [131054080080] |" call Decho("top-level info: manpagetopic<".manpagetopic.">") endif [131054080090] |Deleting those lines will allow :Man info.i to work, but I still can't follow links in the displayed page. [131054080100] |It works by calling info on the command line and capturing the output, so perhaps it's just that info's CLI has changed? [131054080110] |Specifically, $ info info Advanced will bring up the topic "Advanced" within the "info" node.
[131054080120] |I dug around in the addon's source code for a while but didn't find any obvious way to direct it to pass the node as an extra argument instead of doing what it seems to be supposed to do, which is to wrap the node name in parentheses and prefix it to the topic, passing the combination as a single command-line argument to info. [131054080130] |I've not tried to use this addon to view info documentation before. [131054080140] |I'm running an updated Ubuntu 10.10, using the vim-gnome package. [131054080150] |I've filed some semblance of a bug report at what seems to be the recommended location. [131054080160] |Have others had success using this addon to view info documentation? [131054080170] |I feel like I'm lost in the mists that eternally shroud the outer reaches of Obscurity. [131054080180] |note: Whilst composing this message, I've discovered the info addon, which seems to work acceptably, at least at first glance. [131054090010] |No. [131054090020] |The right place to file a bug report is directly with the author. [131054090030] |If the author uses a tracker for their plugins, that may be a better option (it depends on each script maintainer). [131054100010] |I've discovered that the easiest way to view info docs in vim is to just open them. [131054100020] |They are just gzipped text with some binary codes added in as markup. [131054100030] |This is especially useful to know in cases where it's not practical to install an addon. [131054100040] |The location of the info docs is distro-specific; under Ubuntu, and presumably other Debian-likes, they are at /usr/share/info/*.info.gz. [131054100050] |They are gzipped, but vim will handle the translation for you if you just open them. [131054100060] |I actually prefer this manner of viewing them to using the info reader, as it presents the docs as one long file that you can quickly search or page through. [131054100070] |Having an addon would still be useful to facilitate following links and such. [131054100080] |I actually haven't tried using the info addon mentioned in the update; I haven't needed to look at an info file since then. [131054110010] |How do I merge two *.avi files into one? [131054110020] |I have 2 *.avi files: [131054110030] |Which GUI allows me to merge these? [131054120010] |What controls turning off during overheating on Linux? [131054120020] |I started experiencing overheating (I have a few possible troublemakers for it, and the cause is a separate question), but I wondered what controls the handling of overheating. [131054120030] |Sometimes my computer: [131054120040] |
  • Halts properly ("System is going down NOW")
  • [131054120050] |
  • Is halted by the controller (the screen turns blank; goodbye to anything not yet written out of the disk cache)
  • [131054120060] |Hence my question - how to change behavior to hibernation and/or hybrid-hibernations (well - arbitrary command - from that point I could handle)? [131054120070] |Is is possible to specify treshholds to increase safe limits? [131054130010] |Well, usually you can set this temperature in the BIOS settings and it depends on the CPU type - I presume your CPU is getting hot, not some other hardware part. [131054130020] |If you are running linux, you can always construct some script reading out temperatures from /proc/acpi/... files - you can find temperature information there on some systems. [131054130030] |Or you can use software like Lm_sensors which can also find temperature sensors. [131054130040] |Then I guess you could construct script which reads out temperature and issue sync and shutdown early to avoid hard crash. [131054140010] |Turning off splitting lines in vim [131054140020] |One of the most annoying features of vim is splitting lines. [131054140030] |For some reasons someone found it would be good to split lines even in whitespace sensitive formats. [131054140040] |For example changing: [131054140050] |to [131054140060] |How to turn it off? [131054140070] |Edit: It is not visual wrap - it is wrapped in file (i.e. vim inserts \n in file if it considers line too long). [131054140080] |Wrapping long lines is sane and while one may want to turn of I can live without it. [131054150010] |I assume you are talking about the feature where long lines are visually "wrapped" to be shown on multiple lines. [131054150020] |(This does not interfere with the content of the file though, it is just the way the text is presented visually.) [131054150030] |Issue the following command to turn of this feature: [131054150040] |If you are seeing that vim is really splitting lines automatically (not just visually but by actually inserting line breaks) then you must have configured vim to limit text width; this is not enabled by default. [131054150050] |In that case you can disable it again like this: [131054150060] |Edit your .vimrc configuration file to make these changes permanent. [131054160010] |set formatoptions-=tc [131054160020] |See :help fo-table for more info. [131054160030] |Wim's suggestion of set textwidth=0 should have the same effect, though a lot of annoying filetype plugins will undo that for you. [131054160040] |(Ugh.) [131054170010] |How do I tell a script to wait for a process to start accepting requests on a port? [131054170020] |I need a command that will wait for a process to start accepting requests on a specific port. [131054170030] |Is there something in linux that does that? [131054180010] |The best test to see if a server is accepting connections is to actually try connecting. [131054180020] |Use a regular client for whatever protocol your server speaks and try a no-op command. [131054180030] |If you want a lightweight TCP or UDP client you can drive simply from the shell, use netcat. [131054180040] |How to program a conversation depends on the protocol; many protocols have the server close the connection on a certain input, and netcat will then exit. [131054180050] |You can also tell netcat to exit after establishing the connection. [131054180060] |An alternative approach is to wait for the server process to open a listening socket. [131054180070] |Or you might want to target a specific process ID: [131054180080] |I can't think of any way to react to the process starting to listen to the socket (which would avoid a polling approach) short of using ptrace. 
[131054190010] |Suggestions for a command line IRC client that supports downloads? [131054190020] |It's been a while since I've used IRC, and the last time I did was on a Windows system with mIRC. [131054190030] |I'm interested in finding a command line client that supports downloads and some degree of automation for those downloads. [131054190040] |Any suggestions? [131054200010] |For command-line IRC, the most popular or commonly-used one is probably irssi. [131054200020] |It's very robust, very flexible, highly extensible with scripts and layout themes, very well-documented, and has a decent community of users and supporters. [131054210010] |The shell is a program whose primary role is to enable the user to call other programs and make them interact. [131054210020] |In a unix context, a shell is almost always a command-line interpreter. [131054210030] |More precisely, unless otherwise specified, a unix shell is compatible with the POSIX / Single UNIX shell specification. [131054210040] |Most unices have such a shell available as /bin/sh. [131054210050] |

    Shell implementations

    [131054210060] |

    Main Bourne-style shells

    [131054210070] |
  • The Bourne shell is one of the two surviving shells from the old days, now mostly superseded by various shells called ash, ksh and bash. [131054210080] |The POSIX specification builds on the Bourne shell.
  • [131054210090] |
  • bash Bash is a Bourne-style, POSIX-compliant shell from the GNU project. [131054210100] |It is the default interactive and scripting shell on most Linux distributions, and available on most other unices. [131054210110] |Bash adds many features, both for scripting and for interactive use.
  • [131054210120] |
  • ksh Ksh is a Bourne-style, POSIX-compliant shell. [131054210130] |It adds many advanced features, mostly for scripting. [131054210140] |Although ksh has been open-source since 2000, it is still less favored in the open source world, and there are several partial ksh clones.
  • [131054210150] |
  • zsh Zsh is mostly Bourne-style, but with a few syntactic differences. [131054210160] |It has a POSIX emulation mode. [131054210170] |It has many extra features, both for scripting and for interactive use.
  • [131054210180] |
  • busybox BusyBox contains a mostly POSIX-compliant shell with some line editing capabilities, together with many simple utilities. [131054210190] |It is targeted at embedded systems.
  • [131054210200] |

    Other well-known shells

    [131054210210] |
  • csh tcsh The C shell is one of the two surviving shells from the old days. [131054210220] |It is not favored for scripting. [131054210230] |The main implementation today is tcsh. [131054210240] |C shells used to have more interactive features than Bourne-style shells, but bash and zsh have now overtaken tcsh.
  • [131054210250] |
  • fish Fish is a relative newcomer, inspired by classical shells and aiming to combine power and simplicity.
  • [131054210260] |

    Further reading

    [131054210270] |
  • What are the fundamental differences between the mainstream *NIX shells?
  • [131054210280] |
  • Object-oriented shell for *nix
  • [131054210290] |

    Interactive use

    [131054210300] |These are features commonly found in shells with good interaction support (bash, tcsh, zsh, fish): [131054210310] |
  • command line editing, often with configurable key bindings.
  • [131054210320] |
  • command-history a history of commands that can be navigated with the Up and Down keys, searched, etc.; also a recall mechanism based on expanding sequences beginning with !.
  • [131054210330] |
  • autocomplete completion of partially-entered file names, command names, options and other arguments.
  • [131054210340] |
  • job-control management of background processes.
  • [131054210350] |
  • prompt showing a prompt before each command, which many users like to configure.
  • [131054210360] |
  • alias defining short names for often-used commands.
  • [131054210370] |

    Further reading

    [131054210380] |
  • What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?
  • [131054210390] |
  • What are your favorite command line features or tricks?
  • [131054210400] |
  • What features are in zsh and missing from bash, or vice versa?
  • [131054210410] |
  • Colorizing your terminal and shell environment?
  • [131054210420] |
  • best way to search my shell's history; How to access the history on the fly in unix?
  • [131054210430] |

    Shell scripting

[131054210440] |Shells have traditional control structures (conditionals, loops) as well as means to combine processes (in particular the pipe). [131054210450] |They have built-in support for a few tasks such as arithmetic and basic string manipulation, but rely on external commands for other things. [131054210460] |Almost every unix-like system provides a POSIX-compliant shell, usually as /bin/sh. [131054210470] |So scripts aiming to be portable between unix variants should be written according to that standard, and start with the #!/bin/sh shebang line. [131054210480] |Many systems have at least ksh or bash available. [131054210490] |These provide a number of useful extensions, though not always with the same syntax. [131054210500] |Features present in both (and in zsh) include local variables in functions, array variables, the double bracket syntax for conditionals ([[ … ]]), and (requiring an option to be set in bash and zsh) additional globbing patterns such as @(…). [131054210510] |A common difficulty in shell programming is quoting. [131054210520] |Unlike in most programming languages, almost everything is a string, and quoting is only necessary around special characters. [131054210530] |However some cases are tricky. [131054210540] |In particular, a common pitfall is that variable and command substitutions ($foo, $(foo)) undergo further expansion and should be protected by double quotes ("$foo", "$(foo)") unless that further expansion is desired.
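To illustrate that pitfall, a minimal sketch (the file name is a placeholder):

    file='my file.txt'   # a name containing a space
    ls -l $file          # unquoted: word splitting makes ls see two arguments
    ls -l "$file"        # quoted: ls sees the single name, my file.txt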

    Related tags

    [131054210560] |
  • quoting a tricky aspect of shell programming
  • [131054210570] |
  • wildcards globbing, i.e., using patterns to match multiple files
  • [131054210580] |
  • io-redirection connecting the input or output of a command to a file
  • [131054210590] |
  • pipe connecting the output of a command to the input of another command
  • [131054210600] |
  • file-management text-processing two common shell tasks
  • [131054210610] |
  • utilities Shells often call external utilities dedicated to one particular task. [131054210620] |Some have their own tags as well: awk cd cp date dd find grep ls mv rm sed
  • [131054210630] |

    Further reading

    [131054210640] |
  • Practical tasks to learn shell scripting.
  • [131054210650] |
  • $VAR vs ${VAR} and to quote or not to quote
  • [131054210660] |
  • How do I delete a file whose name begins with “--”?
  • [131054220010] |The shell is unix's command-line interface. [131054220020] |You can type commands in a shell interactively, or write scripts to automate tasks. [131054230010] |SSH is a protocol for running commands on a remote computer. [131054230020] |

    Implementations

    [131054230030] |
  • Dropbear is a lightweight implementation of SSH targeted at embedded devices.
  • [131054230040] |
  • OpenSSH, developed by the OpenBSD project, is by far the most common implementation of SSH, both server-side and client-side, in the unix world. [131054230050] |If someone mentions SSH in a unix context, assume OpenSSH unless told otherwise.
  • [131054230060] |
  • PuTTY is an SSH client mostly found on Windows.
  • [131054230070] |

    Related programs

    [131054230080] |
  • AutoSSH: Automatically restart SSH sessions and tunnels
  • [131054230090] |
  • Corkscrew: tunnel through HTTP proxies
  • [131054230100] |
  • SSHFS: mount remote filesystems over SSH
  • [131054230110] |

    Troubleshooting

[131054230120] |If public key authentication doesn't work: make sure that on the server side, your home directory (~), the ~/.ssh directory, and the ~/.ssh/authorized_keys file, are all writable only by their owner. [131054230130] |In particular, none of them must be writable by the group (even if the user is alone in the group). chmod 755 or chmod 700 is ok, chmod 770 is not; a typical fix is sketched after the list below. [131054230140] |What to check when something is wrong: [131054230150] |
  • Run ssh -vvv to see a lot of debugging output. [131054230160] |If you post a question asking why you can't connect with ssh, include this output (you may want to anonymize host and user names).
  • [131054230170] |
  • If you can, check the server logs, typically in /var/log/daemon.log or /var/log/auth.log or similar.
  • [131054230180] |
  • If public key authentication isn't working, check the permissions again, especially the group bit (see above).
  • [131054230190] |
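As referenced above, a typical way to set the permissions described in the public-key paragraph, assuming the default OpenSSH file locations:

    chmod 755 ~                        # or 700; no group/other write access
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys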

    Further reading

    [131054230200] |
  • Keep SSH Sessions running after disconnection.
  • [131054230210] |
  • SSH easily copy file to local system.
  • [131054230220] |
  • Quoting in ssh $host $FOO and ssh $host “sudo su user -c $FOO” type constructs
  • [131054230230] |
  • What steps does the system go through when handling an SSH connection?
  • [131054230240] |
  • How can I pause up a running process over ssh, disown it, associate it to a new screen shell and unpause it?
  • [131054240010] |SSH (Secure SHell) is a protocol for running commands on a remote computer [131054250010] |Is there any Project Management FLOSS that has Resource leveling? [131054250020] |An explanation of what Resource leveling is. [131054260010] |Have you tried openproj? [131054260020] |I have never used it myself, but that's the closest that I know of. [131054270010] |How can you schedule a computer to boot at a specific time? [131054270020] |Usually BIOSes have an option to schedule a time to which to boot at. [131054270030] |Is there a Unix/Linux interface to specify the scheduled boot? [131054280010] |NVRAM WakeUp claims to do it; I've never tried. [131054280020] |It may not work on all BIOSes, and if it fails a likely consequence is to overwrite a different critical setting that could make your computer unbootable, so use with caution. [131054280030] |If you only suspend the computer, APM tools can set a wake-up time with apmsleep. [131054280040] |I've successfully used my laptop as an alarm clock with this tool. [131054280050] |But it can't wake up a powered off computer. [131054280060] |If your computer supports ACPI (all modern ones do), and if ACPI actually works on your OS (that, on the other hand, is not a given), there is a standard interface for specifying a wake-up time. [131054280070] |Under Linux, date -u +%s -d 'tomorrow 6:00' >/sys/class/rtc/rtc0/wakealarm should do the trick. [131054280080] |There is a good guide on ACPI wakeup on the MythTV wiki. [131054280090] |If you have another powered-on device on the local network, you can send your computer a wake-on-LAN packet. [131054280100] |Most modern BIOSes support this (you may need to be enable it in the BIOS settings). [131054280110] |The wakeonlan utility can send such packets. [131054290010] |Is there a BSD operating system that can be booted off a logical drive partition? [131054290020] |From several BSD operating systems' documentation, there is a requirement for it to be booted off a primary partition. [131054290030] |Is there any BSD that can be booted off a logical partition in some indirect manner? [131054290040] |If not, what is/are the technical reason(s)? [131054300010] |I can think of three hurdles: [131054300020] |
  • The OS itself. [131054300030] |As far as I know, this is not a problem, since all BSDs store their own partition table (the a, b, c, … partitions) independently from the PC partition table (the slices in BSD terminology).
  • [131054300040] |
  • The bootloader. [131054300050] |This can be a problem, because bootloaders operate under very tight code size constraints, and every feature is an added burden. [131054300060] |But once the bootloader gets to the point where it reaches the BSD partition data, you've won. [131054300070] |Grub can boot a number of BSDs, but not all versions of Grub can boot all versions of *BSD.
  • [131054300080] |
  • The installer. [131054300090] |Here there's no significant size constraint, but supporting logical partitions does add to the complexity. [131054300100] |Still, even if the installer automation doesn't support it, you might get somewhere by issuing the right shell commands at the right time.
  • [131054300110] |Looking at specific variants: [131054300120] |
  • FreeBSD: The installation manual is silent on the topic. [131054300130] |There is a success report; see also this discussion.
  • [131054300140] |
  • NetBSD: The installation manual states that “NetBSD installs in one of the four primary BIOS partitions”. [131054300150] |Supposedly you can install and boot on a logical partition, if you figure out how.
  • [131054300160] |
  • OpenBSD can boot on a logical partition since 4.4, though the installation manual says that “extended partitions may not work”.
  • [131054310010] |When using a Linux console, is it possible to jump to the next or previous word on the command line with CTRL + Right or Left arrow keys? [131054310020] |In terminal emulation applications, pressing CTRL + Left / Right arrows jumps from one word to the previous or next one. [131054310030] |Is it possible to have the same functionality in a Linux console, whether it is in text or in framebuffer modes? [131054310040] |In my configuration, the CTRL + arrow keys are transformed into escaped character sequences and not interpreted. [131054320010] |You can set vim as your command line editor and then hit ESC and jump around vim style (forward, back, end, $, 0, etc) [131054330010] |Emacs-style shortcuts Alt+F, Alt+B work by default with all readline-powered command line programs, like shells. [131054340010] |This is possible if and only if the terminal sends different escape sequences for Ctrl+Left vs Left. [131054340020] |This is not the case by default on the Linux console (at least on my machine). [131054340030] |You can make it so by modifying the keymap. [131054340040] |The exact file to modify may depend on your distribution; on Debian lenny, the file to modify is /etc/console/boottime.kmap.gz. [131054340050] |You need lines like [131054340060] |You might as well choose the same escape sequences as your X terminal emulator. [131054340070] |To find out what the control sequence is, type Ctrl+V Ctrl+Left in a shell; this inserts (on my machine) ^[O5D where ^[ is an escape character. [131054340080] |In the keymap file, \033 represents an escape character. [131054340090] |Configuring the application in the terminal to decode the escape sequence is a separate problem. [131054350010] |Debian/Grub2: Moving root partition to new drive? [131054350020] |Does anybody have a suggestion for how to move the root partition to a new drive and set up grub2 to boot on that drive? [131054350030] |I seem to have no luck instructing grub-mkconfig what it is I want to do (e.g. chroot'ing into my new root just confuses all the scripts). [131054350040] |Background I am running Debian Squeeze on a headless low-power NAS. [131054350050] |My current setup is / on sda0 and /boot on sde0 (a CF card): I needed the separate /boot because sd[a-d] need to do a delayed spin-up. [131054350060] |Now I've found an old 2.5" IDE disk to use as / including /boot to allow me to spin all the big disks down. [131054350070] |What I've tried Basically I went [131054350080] |Then I tried [131054350090] |But that failed with grub asking if root was mounted. [131054350100] |Then I did a half-hearted attempt at setting up /mnt/newroot/grub/grub.cfg to find the kernel image on sdf5, followed by a grub-install --root-directory=/mnt/newroot /dev/sdf. [131054350110] |But this just landed me a grub rescue prompt when I tried booting from sdf. [131054350120] |My backup plan is to just reinstall, so a bonus question (no checkmarks for this one): What do I have to do to get my lvm2 and mdadm config across? [131054350130] |Is it all stored in the filesystems (and will it be automatically discovered), or do I need to take care of it myself? [131054350140] |Solution (thanks to Maciej Piechotka): As Maciej points out, I need to do a proper chroot for all the grub tools to work. [131054350150] |For reference, this is how I did it: [131054350160] |All the files are now copied (see here for a discussion of copy strategies).
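For illustration, that copy step might look something like the following; /dev/sdf5 comes from the question, but the exact rsync flags are an assumption, not the poster's recorded commands:

    mount /dev/sdf5 /mnt/newroot
    rsync -aHAX -x / /mnt/newroot/          # -x: stay on the root filesystem
    rsync -aHAX /boot/ /mnt/newroot/boot/   # /boot was on a separate partition here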
[131054350170] |Fix the new etc/fstab to point to the new root: [131054350180] |Finally, mount dev, sys and proc on the new root and chroot: [131054350190] |We are now chrooted to the future root exactly as it will look. [131054350200] |According to Maciej, it should be ok to just call grub-install, but I did an update-grub first to get a look at the generated /boot/grub/grub.cfg before installing the bootloader. [131054350210] |I am not sure it will be automatically updated? [131054360010] |Mount basic filesystems and copy/modify files while chrooting, like:
  • /dev (mount -o bind /dev/ /path/to/chroot/dev)
  • [131054360030] |
  • /proc (mount -t proc none /path/to/chroot/proc)
  • [131054360040] |
  • /sys (mount -t sysfs none /path/to/chroot/sys)
  • [131054360050] |IIRC that worked for me while installing Grub 2 on Arch and numerous times on Gentoo. [131054360060] |Then, after chrooting to /path/to/chroot, the command was simply grub-install. [131054360070] |As for lvm2 (and I believe mdadm, but I haven't used it), the configuration is stored on disk. [131054360080] |There is configuration specifying what should be read to discover devices. [131054360090] |Assuming your devices are in standard locations (/dev/sd* or /dev/hd*) there should be no problem. [131054360100] |PS. [131054360110] |I would not trust a simple cp of a live system, as there are several places where it can go wrong: [131054360120] |
  • Forgetting to change /etc/fstab and other useful files
  • [131054360130] |
  • Files changing while being copied
  • [131054360140] |
  • Copying garbage (/tmp etc.)
  • [131054370010] |You can install grub from a live distro without chrooting: [131054380010] |Easily updatable troubleshooting LiveUSB distro [131054380020] |I need an easily updatable troubleshooting LiveUSB distro. [131054380030] |By troubleshooting I mean that it should not have anything fancy (X is used to display gparted etc.) but should have access to file archivers/undeleting etc. tools. [131054380040] |Unfortunately most LiveUSB distros are modified LiveCD ones, so they assume a sequential and slow base device (as opposed to a USB flash disk, which is fast compared to a CD, has near-zero latency, and supports only a limited number of writes). [131054380050] |The only method of updating is overwriting the previous content. [131054380060] |I'd like to have an 'easy' package manager on the distro to allow installing tools I need, and from time to time an update which would not destroy any customisation. [131054380070] |On the other hand I still would like to have autodetection scripts etc. [131054390010] |Why not just install a light-weight distro directly onto a USB stick? [131054390020] |My solution to this problem is to have a normal install of Crunchbang #! on my USB key, which I can update, tweak, install extra tools and personal scripts to, and so on. [131054390030] |Works a treat! [131054390040] |You could use any distro you fancy, of course. [131054390050] |Crunchbang is a good choice, but you'll probably have your own preferences. [131054400010] |How to back up the initial state of an external backup drive? [131054400020] |I've picked up an HP Simplesave external drive. [131054400030] |It comes with some fancy software that is of no use to me because I don't use Windows. [131054400040] |Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. [131054400050] |I'd like to save the drive's initial state so that I can restore it if I decide to sell it. [131054400060] |The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. [131054400070] |I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. [131054400080] |I wouldn't suppose any guarantees about how it does this, so it seems important to preserve the exact state of the disk. [131054400090] |The drive is formatted with a single 500GB NTFS partition. [131054400100] |My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. [131054400110] |This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. [131054400120] |I tried gzipping the output of dd. [131054400130] |This reduced the file to a manageable size — the first 18GB was compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem — but it was very slow on my admittedly somewhat derelict Pentium M processor. [131054400140] |The time to do that first 18GB was about 30 minutes. [131054400150] |So I've resorted to dumping the disk state and partition data separately. [131054400160] |
  • I've dumped the partition state with sfdisk -d /dev/sdb
  • I've also created a compressed image of the NTFS partition (the only one on the disk) with [131054400180] |Is there anything else I should do to ensure that I can restore the precise original state of the drive? [131054410010] |sfdisk -d dumps the partition table but not the rest of the boot sector, so if there was a bootloader on the disk it won't be restored. [131054410020] |You can save the boot sector with head -c 512 /dev/sdb > bootsector.img. [131054420010] |Sending input to a screen session from outside [131054420020] |My scenario is this: [131054420030] |I have a screen session running in a remote location. [131054420040] |Inside this screen is a console-based program. [131054420050] |When run without screen, this program starts in the terminal and accepts commands on its standard input. [131054420060] |What I want is a way to remotely send a command to screen so that this command is received by the console program. [131054420070] |Maybe like this: [131054420080] |My PC -> SSH Send Msg Auto -> Screen Session -> Program (Run command received) [131054420090] |So from a remote PC I can send commands via ssh to the screen session, which passes them on to the program. [131054420100] |The program accepts them and executes them. [131054430010] |If I understand correctly, you want to send input to a program running inside a screen session. [131054430020] |You can do this with screen's stuff command. [131054430030] |Use screen's -X option to execute a command in a screen session without attaching to it. [131054430040] |If you want to see the program's output, see the hardcopy, log and logfile commands. [131054440010] |OpenBSD is a BSD variant with a strong emphasis on security [131054450010] |What installer types should commercial software use to support Linux? [131054450020] |The source code is not open or free, so compilation at installation is not an option. [131054450030] |I have seen some companies provide a tar.gz file, leaving it up to the user to uncompress it in a suitable location. [131054450040] |I have seen some companies provide a .tar.gz with an install.sh script to run a basic installer, possibly even prompting the user for install options. [131054450050] |I have seen some companies provide RPM and/or deb files, allowing the user to continue using the native package management tools they are familiar with to install/upgrade/uninstall. [131054450060] |I would like to support the largest number of Linux distributions, make users' lives as easy as possible, and yet maintain as little build/packaging/installer infrastructure as possible too. [131054450070] |Looking for recommendations. [131054460010] |My preference is always for a package (rpm|deb etc.). [131054460020] |Depending on the nature of the software it may be worth targeting packages for specific distros (rhel/centos etc.), but you'll probably never be able to roll enough packages for everyone. [131054460030] |Install scripts can be ok, depending on the script. [131054460040] |For me, the most important thing with non-packaged software is that it's easy to install it at a location I choose. [131054470010] |I see two ways to look at it. [131054470020] |One is to target the most popular Linuxes, providing native packages for each, delivering packages in popularity order. [131054470030] |A few years ago, that meant providing RPMs for Red Hat type Linuxes first, then as time permitted rebuilding the source RPM for each less-popular RPM-based Linux. [131054470040] |This is why, say, the Mandriva RPM is often a bit older than the Red Hat or SuSE RPM.
[131054470050] |With Ubuntu being so popular these past few years, though, you might want to start with .deb and add RPM later. [131054470060] |The other is to try to target all Linuxes at once, which is what those providing binary tarballs are attempting. [131054470070] |I really dislike this option, as a sysadmin and end user. [131054470080] |Such tarballs scatter files all over the system you unpack them on, and there's no option later for niceties like uninstall, package verification, intelligent upgrades, etc. [131054470090] |You can try a mixed approach: native packages for the most popular Linuxes, plus binary tarballs for oddball Linuxes and old-school sysadmins who don't like package managers for whatever reason. [131054480010] |Games tend to use an installer (formerly Loki Installer, nowadays MojoSetup), which installs a game cleanly into a prefix and handles stuff like icons. [131054490010] |Whatever you do, make sure you include a "support script" that allows you to collect as much information as possible on the target system for troubleshooting errors. [131054490020] |I guarantee you will run into issues, and when debugging, what the customer says is often very different from reality. [131054500010] |This tag is about the history of Unix systems and their main components. [131054500020] |For the recall of past commands in shells and other applications, use command-history. [131054500030] |

    Further reading

    [131054500040] |
  • Evolution of Operating systems from Unix
  • [131054500050] |External links: [131054500060] |
  • Wikipedia article
  • [131054500070] |
  • Simplified Unix timeline
  • [131054510010] |The history of Unix systems and their main components [131054530010] |Use this tag for interoperability with Windows (dual boot, virtual machines, mixed networks, porting software, … [131054540010] |Which tools for ASCII portfolio visualization? [131054540020] |I. I want to do a simple graph from timestamp YEAR-month-day to valuation, not wanting to use spreadsheets. [131054540030] |Is there some ASCII tool for it to see it on CLI? [131054540040] |There are over 500k lines of data, and I want to see only a sketch of it like in Ascii: [131054540050] |II. [131054540060] |Then, I want to see allocation like in pizza slices: [131054540070] |I know how-to-get grappy CSV data in Python but totally newbie in visualization: [131054540080] |Before I do my own ASCII visualization thing, I want to know whether such thing exists. [131054540090] |How do you visualize your portfolio in ASCII? [131054550010] |Searching here you will find that gnuplot (in dumb terminal mode) has been suggested before. [131054560010] |Best way to work through / display a tree of images sorted by size... [131054560020] |I've got a deep directory tree containing .PNG files. [131054560030] |I'd like to find all the .PNG files in the directory, sort them in order of size from smallest to largest, and then display every 50th image. [131054560040] |(I'm analyzing data and trying to find the best size cutoff between "potentially useful" and "random noise" so I want an easy way to skim through these thousands of images....) [131054560050] |Any scripting help is appreciated. [131054560060] |I know how to use 'find' to search by size, but not how to sort the results or run a display program to display every 50th without having it pause the process waiting. [131054560070] |I am using MacOS Snow Leopard, btw. [131054560080] |Thanks! [131054570010] |Just combine find to search for *.png, ls or stat to get the file size, sort to sort by file size and awk to print only every 50th line. [131054580010] |Is that size as in file size, or size as in image dimensions? [131054580020] |In zsh, to see all .png files in the current directory and its subdirectories, sorted by increasing file size: [131054580030] |There's no convenient glob qualifier for grabbing every N files. [131054580040] |Here's a loop that sets the array $a to contain every 50th file (starting with the largest). [131054580050] |Without zsh or GNU find, there's no easy way of sorting find output by metadata (there's find -ls or find -exec ls or find -exec stat, but they might not work with files containing non-printable characters, so I don't like to recommend them). [131054580060] |Here's a way to do it in Perl. [131054580070] |And here's a way to view every 50th file (starting with the largest): [131054580080] |Another approach would be to create symbolic links in a single directory, with names ordered by file size. [131054580090] |In zsh: [131054580100] |With Perl: [131054590010] |If you don't have GNU tools or if your filenames contain lots of special characters, I would use one of Gilles' excellent solutions. [131054590020] |However, here is one solution using GNU find, sort, cut and awk-- basically the set of tools that alex suggested: [131054590030] |Here I've used eye-of-gnome (eog) as the image viewer, mostly because it was one of the few I could find that take multiple command line arguments and does something sensible with it. [131054590040] |I'm sure one could remove the sort and the cut in favor of some more awk code. 
[131054590050] |To be completely honest, I'm not sure how this solution will interact with whitespace. [131054600010] |Which Linux distributions have the highest install base as of mid-2010? [131054600020] |Which Linux distributions have the most installed machines? [131054600030] |What is the distribution in terms of distro, architecture, and version? e.g. [131054600040] |Ubuntu vs. RHEL vs. SUSE vs. Fedora vs. CentOS vs. Arch vs. ... [131054600050] |Ubuntu 8.04 vs. Ubuntu 9.04 vs. Ubuntu 10.04 vs. RHEL 3 vs. RHEL 4 vs. RHEL 5 vs. ... amd64 vs. i386 vs. ppc vs. ppc64 vs. ARM vs. ... [131054610010] |My guess is Ubuntu for desktops, and CentOS and Ubuntu for servers. [131054620010] |For servers, I think Ubuntu is the vast majority. [131054620020] |For example, if you look at Linode's stats, 48% of all VPS deployments are Ubuntu, and the next closest competitor is Debian with 24%. [131054620030] |CentOS isn't nearly as popular as others seem to think. [131054620040] |Granted, this is only one VPS provider, but I think it's probably pretty indicative of real-world scenarios. [131054620050] |As far as desktop deployments go, I can't provide any verifiable numbers... but I think most people would agree it's most likely also Ubuntu. [131054620060] |EDIT: actually I found the Linux Journal Readers' Choice Awards for 2009, and they indicate that 45% of readers choose Ubuntu. [131054620070] |Take it as you will. [131054630010] |Comprehensive data on this topic is available from IDC, a large market-research firm that sells reports. [131054630020] |As was mentioned earlier by Stefan Lasiewski and in the original question, there are a number of different ways to slice and dice server/desktop and raw operating system data. [131054630030] |A complicating factor for Linux in particular is the number of paid versus unpaid subscriptions in the market. [131054630040] |Consequently, you are likely to find that market estimations have a wide margin of error. [131054630050] |Given these facts, I would say that this question is unanswerable without further detail or specificity. [131054630060] |In addition, all answers would be subject to debate on the rationale and methodology. [131054640010] |Can there be multiple kernels executing at the same time? [131054640020] |I know that Linux OSes are typically multiprogrammed, which means that multiple processes can be active at the same time. [131054640030] |Can there be multiple kernels executing at the same time? [131054650010] |Sort of. Check out User-mode Linux. [131054660010] |With most virtualization solutions (Xen, VirtualBox, VMware and the like), you certainly have multiple kernels running at the same time on a single machine. [131054670010] |Is it possible to support multiple processes without support for virtual memory? [131054670020] |Is it possible to support multiple processes without support for virtual memory? [131054670030] |I would like to know more about it if so. [131054680010] |This depends on how you define processes vs. threads in terms of memory. [131054680020] |One of the functions of virtual memory is partitioning. [131054680030] |While it is possible to run multiple processes without any partitioning, this would be more like running multiple threads than processes - sharing the same address space. [131054690010] |It is certainly possible, with some constraints - memory protection, as already stated, being one issue. [131054690020] |For example, µClinux (http://www.uclinux.org/) supports multiple processes without implementing virtual memory.
[131054690030] |Note that some CPUs, like at least the Analog Devices Blackfin, do provide an MPU (Memory Protection Unit): http://docs.blackfin.uclinux.org/doku.php?id=bfin:mpu . [131054690040] |This allows operating systems without virtual memory to still partition memory. [131054700010] |You can run a multi-process operating system even with no hardware support (no MMU), with all pointers representing a physical address. [131054700020] |You do, however, lose several key features usually provided through the MMU: [131054700030] |
  • Since a pointer always points to a specific place in RAM, you can't have swap (or only in a very limited way). [131054700040] |Normally, the MMU raises an exception when it can't find a physical page for a given virtual address, and the OS-provided exception handler fetches the page from swap.
  • [131054700050] |Since a pointer is dereferenced with no check, every process can access every other process's memory, and the kernel's memory. [131054700060] |Normally, the MMU raises an exception when it can't find a physical page for a given virtual address, and the OS-provided exception handler terminates the process for attempting an invalid access.
  • [131054700070] |Since the same pointer has the same meaning in different processes, you can't easily implement fork. [131054700080] |Normally, the effect of fork is to make a copy¹ of the process's physical memory, and create a new virtual memory map from the same virtual addresses to the new physical addresses.
[131054700090] |There are unix-like operating systems that work on systems with no MMU.
  • [131054700100] |MINIX is a unix variant originally developed by Andrew Tanenbaum as a companion to his book Operating Systems: Design and Implementation. [131054700110] |The original versions ran on the PCs of the time, which couldn't support virtual memory. [131054700120] |(Given your interests, I recommend reading this book, even an older edition if that's all you can afford.)
  • [131054700130] |µClinux is a variant of Linux for microcontrollers without an MMU. µClinux's limitations include not implementing a general fork (only vfork is supported) and the absence of memory protection, but there is preemptive multitasking.
[131054700140] |¹ In modern unices, this is usually done lazily (copy-on-write), which again relies on the MMU raising an exception when it can't find a physical page. [131054710010] |Segmentation fault when trying to run glxgears via VirtualGL [131054710020] |(Follow-up on How to efficiently use 3D via a remote connection?) [131054710030] |I installed the amd64 package on the server and the i386 one on the client. [131054710040] |Following the user's guide, I run this on the client: [131054710050] |This causes a segfault; using vglconnect -s for an ssh tunnel doesn't work either. [131054710060] |I also tried the TurboVNC method, where starting vglrun glxgears works, but I'd prefer transmitting only the application window, using the JPEG compression. [131054710070] |Is the problem the 32 <-> 64 bit mismatch? [131054710080] |How can I fix things? [131054720010] |conky: proper column alignment [131054720020] |Say I want something like the following in my .conkyrc. [131054720030] |Do I have to align the columns manually by adding spaces, or is there a way to tell conky to align things in columns? [131054720040] |With fewer columns, I could just use $alignc and $alignr, but I can't do that here... [131054730010] |As long as you stick to left-aligned columns or a non-proportional font, ${goto N} works. [131054730020] |For right alignment, you can try playing with alignr and offset. [131054740010] |How is paging managed in the absence of swapping? [131054740020] |How is paging managed in the absence of swapping? [131054740030] |In that case, how will a page fault be handled? [131054740040] |What I mean is: if no swap is available, then how is paging managed? [131054740050] |I know that there are two lists of pages - a free_pages list and an allocated_pages list. [131054740060] |When the free_pages list runs low, the kernel moves the LRU (least recently used) pages from the allocated_pages list to the swap partition. [131054740070] |I just want to know what will happen if there is no swap partition. [131054750010] |Swapping allows unused pages to be moved out of memory and onto a disk. [131054750020] |However, it is not essential to the actual paging operation, which happens even if there is no swap. [131054760010] |If I understand your question correctly, you are asking how paging occurs if no swap space is defined. [131054760020] |On OSes using the traditional Unix approach, virtual memory is always backed by physical RAM, so no page fault can occur because of the missing swap area - outside, of course, of unrelated cases like memory-mapped files or bogus pointers. [131054760030] |On an overcommitting OS like Linux, there is no strict reservation, so the first access to an unmapped page while no more RAM is available triggers the OOM killer or some other destructive/error event. [131054770010] |As far as I understand your question, it usually happens like this: [131054770020] |If you allocate memory:
  • [131054770030] |Mark the memory as allocated, but don't allocate physical memory (hence on first access there will be a page fault). [131054770040] |In Linux it stops at this stage, but a system may instead allocate physical memory immediately - it then performs, at allocation time, an algorithm similar to the page-fault steps below, except that the OOM case will not happen.
[131054770050] |If there is a page fault (an access to a page that is not mapped):
  1. [131054770060] |Check whether the memory is allocated; if not, return an error.
  2. [131054770070] |Check whether there is a free physical page. [131054770080] |If there is, go to step 5.
  3. [131054770090] |Check whether there is something that can be written back to disk (like a file page from the cache), or whether there is free space on swap (if there is no swap, consider it a swap of size 0). [131054770100] |If there is, write the file/block back to disk or write the page to swap, then unmap it and go to step 5. [131054770110] |If both are possible, choose either.
  4. [131054770120] |Return an OOM condition. [131054770130] |What happens next depends on the kernel - it may display an error to the user, kernel panic/blue screen, find some process to kill, etc.
  5. [131054770140] |Map the page that caused the fault to the freed page. [131054770150] |If the page was swapped out, read it from swap and put it in the page. [131054770160] |If the page is backed by a file, read the file and put its content there.
[131054770170] |In general, you may say that having no swap is equivalent to having a completely full swap. [131054780010] |What's the difference between fork() and vfork()? [131054780020] |I would like to understand in detail the difference between fork() and vfork(). [131054780030] |I was not able to digest the man page completely. [131054780040] |I would also like to clarify one of my colleague's comments: "In current Linux, there is no vfork(); even if you call it, it will internally call fork()." [131054790010] |Man pages are usually terse reference documents. [131054790020] |Wikipedia is a better place to turn to for conceptual explanations. [131054790030] |Fork duplicates a process: it creates a child process which is almost identical to the parent process (the most obvious difference is that the new process has a different process ID). [131054790040] |In particular, fork (conceptually) must copy all the parent process's memory. [131054790050] |As this is rather costly, vfork was invented to handle a common special case where the copy is not necessary. [131054790060] |Often, the first thing the child process does is to load a new program image, so this is what happens: [131054790070] |The execve call loads a new executable program, and this replaces the process's code and data memory by the code of the new executable and a fresh data memory. [131054790080] |So the whole memory copy created by fork was all for nothing. [131054790090] |Thus the vfork call was invented. [131054790100] |It does not make a copy of the memory. [131054790110] |Therefore vfork is cheap, but it's hard to use, since you have to make sure you don't access any of the process's stack or heap space in the child process. [131054790120] |Note that even reading could be a problem, because the parent process keeps executing. [131054790130] |For example, this code is broken (it may or may not work depending on whether the child or the parent gets a time slice first): [131054790140] |Since the invention of vfork, better optimizations have been devised. [131054790150] |Most modern systems, including Linux, use a form of copy-on-write, where the pages in the process memory are not copied at the time of the fork call, but later, when the parent or child first writes to the page. [131054790160] |That is, each page starts out as shared, and remains shared until either process writes to that page; the process that writes gets a new physical page (with the same virtual address). [131054790170] |Copy-on-write makes vfork mostly useless, since fork won't make any copy in the cases where vfork would be usable. [131054790180] |Linux does retain vfork. [131054790190] |The fork system call must still make a copy of the process's virtual memory table, even if it doesn't copy the actual memory; vfork doesn't even need to do this. [131054790200] |The performance improvement is negligible in most applications. [131054800010] |power management of USB-enclosed hard drives [131054800020] |With a typical USB hard drive enclosure, is the full range of drive power management functionality available? [131054800030] |edit: moved this into a separate question: [131054800040] |In what may be an unrelated matter: is it possible to suspend a PC without the attached USB-powered drive being unmounted and then remounted on resume? [131054800050] |This is the behaviour I'm currently seeing (running Ubuntu Linux 10.10). [131054800060] |/edit [131054800070] |Are there certain models or brands that provide more complete control over this aspect of drive operation?
[131054800080] |My Friendly Neighbourhood Computer Store carries (part of) the Vantec Nexstar product line. [131054810010] |Count the number of words of a particular length from a file [131054810020] |I've used the below command to count the number of words in a file: [131054810030] |This returns the list of words, each with a count. [131054810040] |Now I want to count the number of words of a particular length. [131054810050] |For example, given a file with these contents: [131054810060] |When I run the shell script with the argument 6, it should display/count the words with 6 characters (paresh, jayesh, and hitesh, with count 3). [131054810070] |How can I do that? [131054820010] |If you grep for the regular expression ^.{6}$ (with grep -E, since the {6} interval is extended-regex syntax), it will return the lines with exactly six characters: [131054830010] |Also, [131054830020] |or, with tr [131054840010] |Install Ubuntu from a small USB stick [131054840020] |I'd like to install Ubuntu (Desktop or Netbook edition, preferably the latest version) onto a laptop using a small USB stick. [131054840030] |(480 MB free space.) [131054840040] |How can I do this? [131054850010] |There is a dedicated article on this: [131054850020] |https://help.ubuntu.com/community/Installation/FromUSBStick [131054850030] |In brief: [131054850040] |
  • Download the ISO.
  • [131054850050] |Download UnetBootin: http://unetbootin.sourceforge.net/
  • [131054850060] |Write the ISO to your USB stick using UnetBootin. [131054850070] |Your USB stick will become a live USB from which you can boot.
  • [131054850080] |Boot the system from the USB stick and choose Install.
[131054860010] |Hi, [131054860020] |you can install a command-line (CLI) Ubuntu using the Alternate Install or Mini ISO (which should fit on your 512 MB stick). [131054860030] |Then install CrunchBang using the alternate method: http://crunchbanglinux.org/forums/topic/26/crunchbang-linux-81001-alternative-installation/ [131054860040] |You could also try the Lite version of CrunchBang, which is less than 512 MB. [131054860050] |You could use Unetbootin to prepare the USB stick. [131054870010] |Having trouble reinstalling Grub from a live CD [131054870020] |I have a 64-bit Sabayon Linux and Mac OS X dual boot. [131054870030] |After using gparted to resize my partitions, I can no longer boot Sabayon. [131054870040] |I've tried chrooting and fixing grub from several live CDs, but they have all said "command not found" about either grub or chroot. [131054870050] |Knoppix came close, but then said it couldn't restore grub because it's 32-bit, rather than 64-bit like my Sabayon installation. [131054870060] |Tips? [131054880010] |Try the Arch Linux live discs; they surely have chroot working (I've repaired my systems several times with them). [131054880020] |Find the ISOs here. [131054880030] |I recommend "dual", as they have 32- and 64-bit versions of Arch Linux. [131054890010] |Compared to other filesystems, why is ZFS so slow with incoming data? [131054890020] |Ok. [131054890030] |I get about 1/3 of the speed on the same drives when using ZFS. [131054890040] |Now, I understand that ZFS is self-healing and that everything is also stored with its hashes. [131054890050] |This makes it safer to use. [131054890060] |I haven't used raidz yet, so this excludes that part of the equation - but please tell me if by using raidz I can speed things up. [131054890070] |If the hashes are computed for comparison - where and when does this happen? [131054890080] |Where does the speed issue come into play? [131054890090] |I'm happy now that I have mirrored and safe data on my system, but I would like to know how I can tweak performance. [131054900010] |In the case of FUSE and ZFS, performance is awful, and that's understandable. [131054900020] |The FreeBSD tuning guide gives some pointers on this. [131054900030] |The article is long, and there are many points specific to different architectures, so I won't copy it here. [131054900040] |The most important point is that you need lots of memory, one gigabyte being the minimum. [131054900050] |raidz only helps if disk I/O performance is your bottleneck. [131054900060] |If the bottleneck is software bugs, the processor or memory, raidz is not going to make it better. [131054910010] |How to reuse physical memory in the absence of swapping? [131054910020] |How can I reuse physical memory in the absence of swapping? [131054920010] |Just don't? [131054920020] |Let the kernel do its job, it knows better. [131054930010] |Why should a child of a vfork or fork call _exit() instead of exit()? [131054930020] |From the man page of vfork(): [131054930030] |vfork() differs from fork() in that the parent is suspended until the child makes a call to execve(2) or _exit(2). [131054930040] |The child shares all memory with its parent, including the stack, until execve() is issued by the child. [131054930050] |The child must not return from the current function or call exit(), but may call _exit(). [131054930060] |Why should the child use _exit() rather than simply calling exit()? [131054930070] |I hope this is applicable to both vfork() and fork().
[131054940010] |exit does additional cleanup, such as calling the functions registered with atexit, and hence accesses data outside the copied part; _exit performs the system call directly, without any cleanup other than what happens inside the kernel. [131054950010] |As seen earlier, vfork does not allow the child process to access the parent's memory. exit is a C library function (that's why it's often written as exit(3)). [131054950020] |It performs various cleanup tasks such as flushing and closing C streams (the files opened through functions declared in stdio.h) and executing user-specified functions registered with atexit. [131054950030] |All these tasks involve reading and writing to the process memory. [131054950040] |_exit exits without cleanup. [131054950050] |It's directly a system call (which is why it's written as _exit(2)), typically implemented by placing the system call number in a processor register and executing a particular processor instruction (branching to the system call handler). [131054950060] |This doesn't need to access the process memory, so it's safe to do after vfork. [131054950070] |After fork, there is no such restriction: the parent and child process are now completely autonomous. [131054960010] |You have the child call _exit() to avoid flushing stdio (or other) buffers when the child process exits. [131054960020] |Since the child process constitutes an exact copy of the parent process, the child process still has whatever the parent had in "stdout" or "stderr", the buffers from stdio.h. [131054960030] |You can (and will, at inopportune times) get double output by calling exit(): one copy from the child process's atexit handlers, and one from the parent, when the buffers in the parent process get full and get flushed. [131054960040] |I realize the above answer concentrates on stdio.h specifics, but the idea probably carries over to other buffered I/O, just as one of the answers above indicates. [131054970010] |Command Line Completion From History [131054970020] |So, I've looked at history and at Ctrl+R, but they are not what I thought I knew. [131054970030] |Is there a way that I can type the beginning of a command and cycle through the matches in my history with some bash shortcut? [131054970040] |Gives me: [131054980010] |Pressing Ctrl+R will open the reverse history search. [131054980020] |Now start typing your command; this will give the first match. [131054980030] |By pressing Ctrl+R again (and again) you can cycle through the history. [131054980040] |Would give: [131054980050] |Ctrl+R again: [131054990010] |You can use the readline commands history-search-backward and history-search-forward to navigate between command lines beginning with the prefix you've already typed. [131054990020] |Neither of these commands is bound to a key in the default configuration. [131054990030] |Zsh (zle) has similar commands history-beginning-search-backward and history-beginning-search-forward, also not bound to keys by default. [131054990040] |There are also history-search-backward and history-search-forward, which use the first word of the current command as the prefix to search for, regardless of the cursor position. [131055000010] |Under Ubuntu, how do I set a static IP for firewire? [131055000020] |I am using Ubuntu 10.10 on my laptop, which connects to our network wirelessly. [131055000030] |Since it sits on my desk next to my desktop, I have a private network between the two using a firewire cable, because Synergy and file copies are much more pleasant over firewire than wifi.
[131055000040] |I want to set a static IP for the firewire device on boot, so I don't have to keep using ifconfig each time. [131055000050] |However, the device doesn't appear in Gnome's NetworkManager. [131055000060] |How can I set a static IP for the firewire device? [131055000070] |Terminal commands are fine, as long as NetworkManager won't blow the config away. [131055010010] |On Debian and Ubuntu, the place to configure networking without Network Manager is /etc/network/interfaces. [131055010020] |Something like this should work (you may need to change the interface number): [131055010030] |Run ifup eth2 and ifdown eth2 to bring the interface up or down. [131055010040] |The auto statement causes the interface to be brought up as part of the boot process. [131055010050] |Network Manager won't touch an interface mentioned in /etc/network/interfaces. [131055010060] |Zeroconf is often nice for Firewire links: if you run it at both ends, it automatically negotiates addresses and routing. [131055010070] |However, it's no help if you want a more managed network (e.g. to give your laptop a name, bring it inside a casual firewall, …). [131055020010] |Intercept "command not found" error in zsh [131055020020] |Is there a way to intercept the "command not found" error in ZSH? [131055020030] |I've seen this is apparently possible in bash, but I couldn't find anything about doing it in zsh. [131055030010] |There is; it's the same as in bash: you make a function named command_not_found_handler. [131055030020] |It'll be passed all the arguments that were given in the shell. [131055040010] |USB Ubuntu with whole-disk encryption [131055040020] |Is it possible to create a single-user USB installation (with persistence) of Ubuntu Linux such that the entire USB stick is encrypted and requires a passphrase at boot time? [131055040030] |Is there an online tutorial for this? [131055050010] |It should be straightforward to make a persistent installation directly on a USB stick, as if it were an internal disk. [131055050020] |Plug in your Ubuntu installation media (I recommend not putting it on the same stick, so that the two are bootable separately), plug in your USB stick, and point the installer to the stick. [131055050030] |The server installer (alternate CD) supports creating and installing to an encrypted partition (with dm-crypt). [131055060010] |Adding efficient storage to a laptop-based system [131055060020] |I've recently asked an as-yet-unanswered related question. [131055060030] |My question here is a bit different: I want to know the best way to add storage to a laptop-based system without sacrificing power efficiency or "material efficiency". [131055060040] |By the last, I mean anything which would cause the storage elements to deteriorate more rapidly than necessary. [131055060050] |The main example I can think of: if spindown happens too infrequently, or perhaps too frequently, the lifespan of the drive in question may be reduced. [131055060060] |aside [131055060070] |Currently I'm trying to figure out how to get an HP Simplesave 2.5" drive to behave reasonably well. [131055060080] |Obviously this is not the optimal choice; it was a Boxing Week sale item which may end up being taken back. [131055060090] |Various tests on it using hdparm seem to indicate that it's not using the value of the -S parameter, which is supposed to determine how long it waits to spin down.
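(A sketch of the sort of tests being described - the device node /dev/sdb is hypothetical, and the option meanings are as documented in man hdparm:)

    hdparm -S 60 /dev/sdb    # request a standby (spindown) timeout of 60*5 = 300 seconds
    hdparm -B 200 /dev/sdb   # set a high APM level (values of 127 or less permit spindown)
    hdparm -C /dev/sdb       # afterwards, report whether the drive is active/idle or in standby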
[131055060100] |Instead, it seems to spin down after 10 seconds if the -B option is set to 127 or less, and after a long period of time, or perhaps not at all, if it is higher. [131055060110] |I mention this mainly because I'm not entirely convinced that these directives will tend to work as man hdparm says they should, even on drives mounted in proper USB enclosures. [131055060120] |The manual does mention that newer enclosures tend to support these features. [131055060130] |main point [131055060140] |The system tends to function mostly as a server, though it does see occasional use as a media box. [131055060150] |It may get suspended from time to time, or may end up being powered on and active (though idle) for months. [131055060160] |I'd like to have available the same sort of power management functionality that I would get with an internally-mounted HDD in a desktop (or rack-mounted?) server. [131055060170] |This includes spinning down and/or sleeping the drive when the server is suspended. [131055060180] |So I'd like to know the best approach to finding an efficient storage solution. [131055060190] |Are certain brands of enclosures better for this? [131055060200] |Is it necessary to use a 3.5" enclosure, or at least one that is powered separately from the USB line, to get proper power management facilities? [131055060210] |Or would I have to use a full NAS system for that? [131055070010] |Need a programmer's advice on *X display manager, window manager and compositing manager combination [131055070020] |First of all: I asked this question on SuperUser, when I wasn't thinking about a StackExchange site for Linux-related questions. [131055070030] |So if this violates any rules, please feel free to close it. [131055070040] |I have fought with myself over whether or not I should ask this question, but I find myself stuck and I need another expert opinion. [131055070050] |I can't seem to find the right combination of display and window manager (and compositing manager). [131055070060] |I have tried some different combinations, but most of them don't work for me. [131055070070] |I have been working with Linux for a few years now, and currently I'm running Gentoo with GDM, Openbox (standalone, Gnome-aware) and xcompmgr. [131055070080] |But I have tried Metacity, Awesome and Fluxbox, with and without Compiz, but always with GDM. [131055070090] |What I want: a lightweight, HIGHLY configurable environment that doesn't rely on mouse input too much (except for web browsing and image processing). [131055070100] |95% of the time I'm programming, with multiple consoles and desktops on multiple screens. [131055070110] |What makes me ask is that most lightweight environments seem somewhat "unfinished" and quite often show unexpected behavior, and that doesn't make me feel too good, as I want an environment that's stable. [131055070120] |And of course I want an environment which is not TOO ugly to look at, as I use it for an average of 10 hours a day. :) Any thoughts? [131055070130] |What do you use in a similar situation? [131055070140] |Thanks for any advice! [131055070150] |(At SuperUser I was told to try XFCE. [131055070160] |I am doing that right now.) [131055070170] |Greetings [131055080010] |I know many people use XMonad. [131055080020] |It is highly configurable and scriptable, and it integrates with GNOME etc. [131055080030] |The only 'disadvantage' is that it uses Haskell, a beautiful but not so popular purely functional language. [131055090010] |If you want to go mouseless, you should try a tiling WM.
[131055090020] |Personally, my favorite is Awesome, but there are plenty more in that question. [131055090030] |As for a compositing manager, xcompmgr has already been mentioned, but Cairo Composite Manager (CCM) seems nice too, although I still find it less stable. [131055090040] |As always, YMMV. [131055100010] |Which systems have a 'pager' shortcut/alias? [131055100020] |On a Debian system, one can type pager in order to use whatever pager program happens to be default/available. [131055100030] |By default, less is used, and if it is not available, the lesser more gets to do the job. [131055100040] |Is such a thing available in other Unix and Linux systems? [131055110010] |All Linux distributions I have used so far (Gentoo, Debian, Slackware, Fedora, OpenSuse) had an environment variable called PAGER which set the pager (by default, as said, less). [131055110020] |It's set in your shell environment. [131055110030] |I think the command man uses this variable. [131055120010] |YMMV depending on what you have installed, but this is Debian-specific (well, and derivatives too). [131055120020] |Customarily one uses $PAGER with a fallback to more. [131055130010] |The unix tradition is for applications that want to call a pager to call $PAGER, i.e. use the contents of the environment variable PAGER as a command name. [131055130020] |(Whether shell metacharacters are expanded in $PAGER is not consistent between applications.) [131055130030] |The unix tradition further uses more if the PAGER variable is not set. [131055130040] |There is a similar tradition for text editors: use $EDITOR (or, for historical reasons, $VISUAL), falling back to vi. [131055130050] |Having a command named pager is specific to Debian (and derivatives, including Ubuntu). /usr/bin/pager is in fact a symbolic link to /etc/alternatives/pager, which points to the “best” available pager (the Debian maintainers decide which is best, and the system administrator can override their choice), using the alternatives framework. [131055130060] |Debian also provides /usr/bin/sensible-pager. [131055130070] |This script runs $PAGER if the variable is set, and falls back to pager otherwise. [131055130080] |Its purpose is to be used in programs where a single pager path has to be hard-coded. [131055130090] |This behavior is documented in the Debian policy manual. [131055140010] |How to avoid powering down certain USB devices when a machine is suspended [131055140020] |I'd like to maintain the power supply to a USB-powered drive when the system goes into suspend (AKA S3 or "suspend-to-RAM") mode. [131055140030] |Normally the power is cut while the machine is suspended, which causes the drive to be unmounted and then remounted when the system is resumed. [131055140040] |This is not really great, especially if the drive itself supports power management. [131055140050] |Although I could run it on a separate power supply, I'd prefer to avoid allocating more wiring for something that can, at least in theory, be done with my existing hardware. [131055140060] |How can I determine if it is possible to do this with my system, and how can I arrange for a particular USB device, i.e. this enclosure, to always be treated this way? [131055140070] |I'm running Ubuntu 10.10. [131055140080] |update [131055140090] |Discovered this ubuntuforums thread, which suggests using acpitool -w to determine the available wake-up level for the USB controller.
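(On 2.6-era kernels, much the same information is also exposed in /proc/acpi/wakeup; a minimal sketch, assuming that interface is present - the device name USB0 is illustrative:)

    cat /proc/acpi/wakeup                    # list devices, the sleep state they can wake from, and enabled/disabled
    echo USB0 | sudo tee /proc/acpi/wakeup   # toggle wake-up capability for the device named USB0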
[131055140100] |Running this on my system shows S1 for the USB controllers: [131055140110] |This seems to be telling me that wake-up capability can only be enabled for USB in the S1 state. [131055140120] |I'm not sure how useful this is, since providing full power and allowing wake-up may be orthogonal concerns. [131055140130] |It may be that enabling wake-up only provides low power, so there may be a different way to enable full power. [131055140140] |If turning on wake-up does equate to providing full power, it looks like I may be able to do what I want by putting a USB card in the expansion slot (I guess that's what PCIE is?). [131055140150] |Though I think I'd want to know a bit more about this before attempting to excavate a USB2 PC card. [131055150010] |Imitating Multifunction Printers [131055150020] |It is difficult to find a good multifunction laser printer in my price range. [131055150030] |But there are several good cheap laser printers (non-multifunction), and I have an excellent scanner (better than I could get on a multifunction). [131055150040] |But making copies would be a pain if I had to operate the printer and scanner separately. [131055150050] |Is there a program or script that can imitate the copier and/or fax functions of a good multifunction printer by automating the scan and passing it to the printer? [131055160010] |Check scanadf. [131055170010] |How to stop Mod4-P from switching the display? [131055170020] |I use Ctrl-P very frequently to scroll backward in the command history, but I often mistype it as Mod4-P, which is bound to the switch-display function. [131055170030] |I've searched around Keyboard Shortcuts and CompizConfig, etc., but I couldn't find where Mod4-P is bound. [131055170040] |What controls that? [131055180010] |Since the same key has bugged me both in Windows (inconveniently switching out of games at the wrong time) and in Linux, I have levered off the keycap itself so I will never accidentally hit it. [131055180020] |However, for a less extreme remedy, you should be able to use xmodmap - (oldish) man page: http://manpages.ubuntu.com/manpages/hardy/man1/xmodmap.1.html [131055190010] |I know it's the same question (and the same person asking), but as I was looking for the same answer, I thought that cross-linking to the solution could be useful for other people: [131055190020] |http://askubuntu.com/questions/20113/how-to-stop-mod4-p-from-switching-the-display/20273#20273 [131055200010] |Removing abstraction from the Ubuntu boot process [131055200020] |I am using Linux again after almost 5 years, and I have observed that the boot process has become quite abstracted. [131055200030] |I mean, not much of what is going on behind the scenes is visible to the user (due to splash screens etc). [131055200040] |Now, this might be good for the end users, but not for the geek :) [131055200050] |I want to bring back the verboseness of old times. [131055200060] |Here is what I have done: [131055200070] |I have been able to get rid of some of it by removing the "splash" and "quiet" parameters from the command line. [131055200080] |However, I still cannot see the services being started one by one (like the ones in init.d). [131055200090] |I assume it's because of the init daemon being replaced by Upstart. [131055200100] |Are there some config files which I can tweak to bring back the verboseness of what is going on? [131055200110] |Also, as soon as the login screen comes up, it erases the boot log history. [131055200120] |Is there a way to disable that?
[131055200130] |Note: I know I can do that by simply switching the distro to Arch or Slackware. [131055200140] |But I don't want to do that. [131055210010] |plymouth handles Ubuntu's splash screen. /usr/share/doc/plymouth/README.Debian explains how to remove it: [131055210020] |Note that you have to run update-grub after the second method. [131055210030] |plymouth is also responsible for /var/log/boot.log. [131055210040] |More boot messages are available via dmesg. [131055220010] |You can pass --verbose on the kernel command line (replacing quiet splash) to make upstart more verbose. [131055220020] |See Upstart debugging. [131055220030] |You can put console output in the global configuration file /etc/init.conf so that every job has its stdout and stderr connected to the console (by default, they're connected to /dev/null). [131055220040] |(I'm not sure whether this in fact works; /etc/init.conf is not actually documented, I haven't tested whether it's read in this way, and this thread is not conclusive. [131055220050] |Please test and report.) [131055220060] |This directive can go into individual jobs' descriptions (/etc/init/*.conf) if you want to be selective (some already have it). [131055230010] |How to assign correct permissions to both the webserver and svn? [131055230020] |I have an issue with file ownership in unix. [131055230030] |I have a Drupal website, and the "files" folder needs to be owned by "www-data" in order to let users upload files with PHP. [131055230040] |However, I'm now using svn, and I need all folders and files to be owned by "svnuser" in order for it to work. [131055230050] |So now, I guess I need to add both users to a group with proper permissions. [131055230060] |I'm not sure exactly what to do - could you tell me the exact steps necessary? [131055230070] |Thanks. [131055240010] |The easiest way to manage this is with access control lists. [131055240020] |They allow permissions to be set for as many users and groups as you want, not just one user and one group like the basic unix permissions. [131055240030] |ACLs need to be enabled on the filesystem. [131055240040] |With ext[234] or reiserfs, you need to pass the acl mount option. [131055240050] |Also make sure you have the ACL utilities installed (the acl package on Debian or Ubuntu). [131055240060] |Set an ACL that allows both users to access the files, and set a matching default ACL on directories (the default ACL is inherited by files created in the directory); a sketch follows at the end of this answer. [131055240070] |You can set different permissions if you like. [131055240080] |The executable bit will be ignored if the file is not made executable through the non-ACL permissions (the ones you set with chmod). [131055240090] |The commands given are for Linux. [131055240100] |Many other unix variants support ACLs, but the exact set of available permissions and the utility to set them are not standardized. [131055240110] |You can use groups to control access if you want. [131055240120] |Even if you do, ACLs have the advantage that you won't run into a umask issue: if you just create a group, you have to ensure that all files and directories are group-writable, which means you have to make sure any process creating a file has a umask of 002 or 007, which in turn may cause permissions elsewhere to be more liberal. [131055240130] |So even if you create a group, ACLs are useful. [131055240140] |Note that I make no warranty as to the suitability of this security model for your use case. [131055240150] |I'm just providing an implementation.
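A minimal sketch of such a setup, assuming the Drupal "files" directory lives at /var/www/drupal/files and the two accounts are called www-data and svnuser (both the path and the names are illustrative):

    # give both users read/write access; X grants execute only on directories
    # (and on files that are already executable)
    setfacl -R -m u:www-data:rwX,u:svnuser:rwX /var/www/drupal/files
    # matching default ACL so that newly created files and directories inherit it
    setfacl -R -m d:u:www-data:rwX,d:u:svnuser:rwX /var/www/drupal/files
    # inspect the result
    getfacl /var/www/drupal/files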
[131055250010] |Extlinux boot error [131055250020] |In another question, extlinux was recommended. [131055250030] |It displays the meaningful message "Boot error". [131055250040] |Any idea why it may not work, or how to debug the problem? [131055250050] |Disk layout: rEFIt is installed on /dev/sda. /dev/sda4 is /, and there is no separate /boot partition. [131055250060] |Method of installation: [131055250070] |extlinux.cfg [131055260010] |Collect files using find [131055260020] |I currently have this script: [131055260030] |to collect all the "*.log" files. [131055260040] |I would like to modify it to also include all the ".txt" files, but I don't know how; this should be fairly simple, right? [131055270010] |If you use the -regex flag instead of -name, you can specify the name as a regular expression: [131055280010] |There are a number of ways to change your find command. [131055280020] |Here is one: [131055280030] |The -o is an OR operator; thus, if the first condition fails, the second condition is checked. [131055280040] |If you have GNU find, you can also use the aptly named -or. [131055280050] |Note that the second expression is not evaluated if the first holds. [131055290010] |Best way to search my shell's history [131055290020] |Is there a better way to search my history file for a command than grep? [131055290030] |I do have some idea what the command starts with, but I don't know how far back in the history it is. [131055290040] |Update: this was formerly zsh-specific, but due to overlapping answers, feel free to answer for any shell (or mode (vi/emacs)) here; just note if your answer is specific. [131055300010] |Not sure if this will help, but I like to use Ctrl-R and then start typing the command. [131055300020] |It should go through your history and present you with the first match. [131055300030] |To toggle through the matches, just hit Ctrl-R again. [131055310010] |Ctrl+R is usually the best way, as descriptor said. [131055310020] |You can also use !string, which runs the most recent command starting with string, or !?string?, which runs the most recent command that contains string. [131055310030] |(I think that's the only stuff relevant to this question, but I covered much more of the history commands in this answer.) [131055320010] |This might be zsh-specific. [131055320020] |In researching this answer I found vi-history-search-backward, which allows one to search with / in vi mode (which I didn't mention I was in); you can then use n and N to search forward and backward respectively (after pressing Enter). [131055330010] |I use the following two most of the time. [131055330020] |But please note that both of them search by starting characters.
  • tcsh: type the first few characters of the command, then press Esc+p; it will show the most recent command starting with that sequence. Press it again and it will show the next match further back.
  • [131055330040] |ksh: use fc -e - followed by the prefix; this will run the last command matching that sequence (see the sketch after this list).
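A quick illustration of the ksh form, assuming a command beginning with "ls" is already in the history:

    $ fc -e - ls    # re-runs the most recent command starting with "ls"

(fc prints the command it is about to re-execute, so you can see what matched; ksh typically also ships the alias r='fc -e -' for the same thing.)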
[131055330050] |I was not able to find something like Bash's Ctrl+R for ksh and tcsh. [131055340010] |When using Bash, type Control-R and then start typing. [131055340020] |Typing Control-R repeatedly after inputting some text will move you back through the matching command lines. [131055350010] |The interactive prompt invoked by Ctrl-R, which has already been mentioned, is the most convenient way. [131055350020] |Additionally, it's popular to remap the Up and Down arrow keys to search the history for the prefix currently on the command line; however, this requires some changes to the ~/.inputrc file. [131055350030] |The classic and less interactive solution is to use shell history expansion. [131055350040] |Typing !foo [131055350050] |will execute the last command that starts with foo, and !?foo? [131055350060] |will execute the last command that contains foo. [131055350070] |More information about history expansion can be found in the Bash reference manual, assuming your shell is Bash or compatible. [131055360010] |Yes, it exists, at least if you are using zsh. [131055360020] |Just press M-p (ESC p) and you will get exactly what you are looking for. [131055370010] |If you type !ls then bash will look through your history and execute the most recent command that begins with "ls". [131055380010] |If you add stty ixon to your ~/.bashrc file, then Ctrl-s will go in the opposite direction to Ctrl-r. [131055390010] |If you add the following to ~/.inputrc [131055390020] |and restart Bash, you can start typing and then use Page Up and Page Down to move backward and forward through the history of lines starting with the same prefix you just typed (a sketch of these bindings appears below). [131055390030] |(Some may prefer to use \e[A and \e[B, which will replace the normal Up and Down actions instead of binding new keys.) [131055400010] |There is a way to use Ctrl-R with tcsh. [131055400020] |Add the following to your ~/.cshrc: [131055400030] |bindkey "^r" i-search-back (it is just a caret sign and the letter r) [131055400040] |That's the biggest time saver for me.
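For reference, a minimal sketch of the ~/.inputrc bindings that the Page Up/Page Down answer above describes (assuming a terminal that sends the usual \e[5~ and \e[6~ sequences for those keys):

    # ~/.inputrc - prefix-search the Bash history with Page Up / Page Down
    "\e[5~": history-search-backward
    "\e[6~": history-search-forward

(Substituting "\e[A" and "\e[B" rebinds the Up and Down arrows instead, as that answer notes.)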