[131057020010] |How to get infrared image data with sane on an epson v700 scanner? [131057020020] |I have read a number of posts on the topic and it seems that for now it is not possible to use sane to read infrared data (second pass) with my Epson Perfection V700 scanner out of the box. [131057020030] |But are there any options to get that working? [131057020040] |Any known patch? [131057020050] |Any configuration behind the scenes? [131057020060] |Any undocumented feature? [131057020070] |I just asked the question on Avasys' message board, but if any of you guys out there had relevant information, I would appreciate it greatly. [131057020080] |Of particular interest is the 6400dpi resolution at 16 bits per channel to scan slides, so I need the infrared option on the epkowa driver, as the epson2 driver seems limited to 3200dpi anyway. [131057020090] |As far as I can tell, neither epkowa nor epson2 offers the infrared scan, though it seems that it should be a simple option (vuescan actually can do the job: it does two scan passes, one for the RGB and one for the infrared, but I would much prefer to be able to do my scans from the command line). [131057020100] |Note that I don't mind if the infrared scan comes out as a separate image; I can manage to combine them as needed. [131057020110] |Also of interest and not included in the current sane-epkowa driver are: 1/ scans with multiple samples 2/ confirmation that the scanimage -brightness setting actually changes the CCD exposure time (rather than performing software image processing) [131057020120] |Edit 2011-02-01: check sane-devel for the start of an answer. [131057020130] |sane-backends v1.0.21 has code that's unused by default. [131057020140] |A tweak in epson2.c to enable my scanner (GT-X900) and a configure option to enable IR (CPPFLAGS=-DSANE_FRAME_IR) gets me a version that can at least pretend to do the job. 
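Once IR support actually works, the intended two-pass invocation might look like the sketch below; the device string and option spellings are assumptions based on the description above, not verified against the patched backend:

```shell
# Hypothetical two-pass scan: one RGB pass and one infrared pass, to be
# combined later. Device name and option names are examples only.
scan_slide() {
  dev=$1    # e.g. "epkowa:interpreter:001:004", as reported by scanimage -L
  scanimage -d "$dev" --mode Color --resolution 6400 --depth 16 > slide-rgb.pnm
  scanimage -d "$dev" --mode Infrared --resolution 6400 --depth 16 > slide-ir.pnm
}
# Usage (hypothetical): scan_slide "epkowa:interpreter:001:004"
```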
[131057020150] |I still have to solve two significant issues before I can call it a victory: 1/ with --mode=Infrared, the output image format seems incorrect; it seems to produce grayscale, albeit in a funny format. [131057020160] |2/ --mode=Infrared does not actually produce IR data; the data looks like grayscale derived from ordinary colour, not like the result of an infrared scan. [131057020170] |More tweaking will be necessary before I get the desired result. [131057020180] |Again, if any of you has any idea that would help debug and fix this (such as where to start in the code, debug settings...), please shout. [131057030010] |How to have the Bookmarks toolbar show up on Firefox 4? [131057030020] |I've upgraded Firefox from 3 to 4 beta8. [131057030030] |My Bookmarks toolbar has disappeared and I can't find the option to make it appear again. [131057040010] |I've since learned of 2 ways to do this: [131057040020] |
  • View menu -> Toolbars -> Bookmarks Toolbar
  • [131057040030] |Right-click on toolbar -> Bookmarks Toolbar
[131057050010] |scp, globbing, and different shells [131057050020] |The other day at work I tried doing [131057050030] |and I received an error about the *: something like "*.txt file not found" (sorry, I'm not at work and I forget the exact error). [131057050040] |On my workstation I run zsh 4.3; on the remote host bash is the default shell, and the version of zsh there is older (4.2 vs 4.3). [131057050050] |I then tried switching to bash on my workstation, and doing the exact same command. [131057050060] |This time it worked. [131057050070] |What is the root cause of this? [131057050080] |Is there any way to do globbing, or wildcards, between these 2 systems (without switching to bash)? [131057060010] |zsh's behavior is a little different here than most other shells'. [131057060020] |Other shells, like bash, try to expand the wildcards. [131057060030] |If they cannot expand to anything, they pass the literal string (containing the wildcards) to the application instead. [131057060040] |But zsh does not do that (well, there is an option controlling whether it does). [131057060050] |zsh will print that error and not perform the command. [131057060060] |You can override that by escaping the wildcard, if you really want it passed to the application. [131057060070] |In this case you do, since you want the shell on the other side to expand it. [131057060080] |So use: [131057060090] |This is actually the correct behavior, since if you did have some local *.txt files in your home they would be expanded to names that might not exist on the remote. [131057060100] |That's not what you want. [131057070010] |Shell script error message [131057070020] |I am trying to learn shell scripting by reading linuxcommand.org. [131057070030] |For some reason I keep getting the error "Not Found", even though it runs all the commands. [131057070040] |The error codes I get are: [131057070050] |Here's the script: [131057080010] |I'm going to make some guesses... 
vim doesn't like if(, so try if (; that may just be a vim quirk, though. [131057080020] |I think your real issue is that you call run_backup before you've created it. Put your function before your if/then/else block. The shell has to be aware of a function before it can use it; this is true of shell scripts generally.
  • The shell executed the $(run_backup) line, naturally resulting in the first error message since there was no command called run_backup.
  • [131057110050] |The shell later executed the function run_backup line, naturally resulting in the second error message since there was no command called function.
  • [131057110060] |Then the shell executed the braced block, which on its own is valid shell syntax.
[131057110070] |Note that as xenoterracide's comment indicates, since you have #!/bin/bash, the script would have been run by bash if you hadn't done something wrong. [131057110080] |Even if you run the script with the correct shell, or (my recommendation) change function to the standard syntax (I don't see any other non-standard construct), you still need to move the function definition to before it's used. [131057110090] |One more useful tip: when you're debugging a shell script, run bash -x /path/to/script (or sh -x /path/to/script, etc.). [131057110100] |This prints a trace of each executed line. [131057110110] |If you want to trace just part of a script, you can use set -x to enable traces and set +x to disable them. [131057110120] |Zsh has better traces than the others, but it has incompatible syntax; you can tell zsh to expect sh syntax with the command emulate sh 2>/dev/null (which has no effect in other shells). [131057120010] |Add a one-off scheduled task through a shell script? [131057120020] |I'd like to execute a task at a given time, once. [131057120030] |I know I can just use crontab -e to add it there, but I'd like to do that through a shellscript. crontab -e gives me some temporary file to edit which is unpredictable. [131057120040] |Also, once the task executes I'd like to remove it from the crontab again, to make sure it's not leaving a mess behind. [131057120050] |So is there a standardized way to add/remove entries from my personal crontab through a script? [131057120060] |I know I could roll my own system: have a script that runs every minute, executing and deleting .sh files from a folder, and have my "addtask" script create .sh files in that folder; but before rolling my own I wonder if there is something already. [131057120070] |If it matters, I use Mac OS X and zsh but I wanted to use something that works on other *nixes as well. [131057130010] |I think the "at" command is what you are after. 
[131057130020] |http://www.softpanorama.org/Utilities/at.shtml [131057140010] |DNS Troubleshoot. Campus domain does not resolve when using campus network. [131057140020] |Hello, I am having a very odd DNS issue. [131057140030] |I live on-campus and the university provides Internet access with my rent, but I can not get access to the university websites. [131057140040] |I am hoping to get direction on how to diagnosis my specific DNS issue as described below. [131057140050] |Thanks a ton! [131057140060] |I have an OpenWRT base router/firewall - 10.3.1-rc4 that works just grand. [131057140070] |I have no issues what so ever, except when I use the campus network, I can no resolve the university's domain. [131057140080] |I have verified that OpenWRT is using the campus name servers: [131057140090] |I signed up for 3rd party ISP for two months and when using the cable co as my ISP, I can resolve the university site just fine. [131057140100] |But why pay $40 per mo, when it is suppose to be included in my rent?! [131057140110] |I have also re-flashed the device to get rid of any potential malformed configs. [131057140120] |I run "tcpdump -n -i br-lan | grep .53 | grep domain.com" and then try to access the domain, but I do not get any output. [131057140130] |When I run the same (with different interface) on my workstation, it shows the router being queried for address. [131057140140] |I started a trouble ticket back in May 2009 and updated it through out the summer. [131057140150] |I can not get the university support to touch it. [131057140160] |Its easy to think its a problem with my router, but I say it is not so easy since EVERY other domain is just fine and even uwyo.edu works just fine when using 3rd party ISP. [131057140170] |The problem seems to be a combo of how UW does there stuff and OpenWRT.. [131057150010] |It seems that the campus has a special configuration for its own domain. 
[131057150020] |The UDP name resolution queries are probably not handled the same way when coming from the local private network (10.*.*.*) as from the public networks (Internet). [131057150030] |Did you try [131057150040] |or [131057150050] |Anyway, if you use the Google DNS servers, you are likely to get your problem fixed, since the NS is outside the campus on the public side. [131057150060] |Change /tmp/resolv.conf.auto with [131057160010] |If I understand correctly, your DNS requests to the outside are being filtered; did you try querying the public name servers for your domain directly (129.72.60.8 or 129.72.60.2)? [131057160020] |If that works, you can forward queries for the domain uwyo.edu to these IPs, and use the campus DNS for all other domains. [131057160030] |If not, the only way is to bypass the filtering rules with a VPN connection to a dedicated server (you can find one for less than $40/month). [131057170010] |Is there a way to move by screen lines in vim? [131057170020] |When editing files in vim, it is a small annoyance that when I move between lines with j and k, it moves by file lines, rather than screen lines like other editors. [131057170030] |With my own files, I can deal with this by automatically wrapping to a certain length in the vim options, but this is no good for files from others. [131057170040] |Is there a vim command to either temporarily toggle the treatment of lines to screen lines, or alternate commands for moving that use screen lines? [131057180010] |You can move around by screen lines by using g in front of the commands: [131057180020] |You can also map the original commands to the g commands like this: [131057180030] |j moves by screen lines now. [131057190010] |How to make VLC play an iso file directly? [131057190020] |Running vlc /path/to/dvd-video.iso, I'm getting: [131057190030] |notes: 
  • The version of vlc is 1.1.3.
  • [131057190050] |The DVD plays well when not in iso format (when it's normal files on the filesystem).
[131057200010] |Mount it as a loopback first. [131057200020] |Or try this... [131057210010] |Do you have to use vlc? mplayer can usually work out that a video dvd iso (or even a partial, incomplete iso) contains mpeg streams well enough to play them without loopback mounting or complicated flags. [131057210020] |Doing it this way means menus, subtitles, and alternate soundtracks are unlikely to work, though. [131057220010] |retrieving names of all open pdf files (in evince or otherwise) [131057220020] |I constantly have many PDF files open. [131057220030] |These are usually downloaded using chrome and immediately opened using evince. [131057220040] |I sometimes want to persist the state of all my open PDF files, so I could re-open the same group of documents at a later time. [131057220050] |This mostly happens when I need to reboot and want to have the same set of documents re-opened, but sometimes I just want to keep a list of open documents for later. [131057220060] |Is there a way to get the names of all open pdf files, from evince or any other program? Or at least, is there a way of asking evince to re-open the same set of documents after a reboot? [131057230010] |Under the assumption that the PDFs you are viewing have the extension .pdf, the following could work to get you a list of open PDFs: [131057230020] |If you only ever use Evince, see Gilles' similar answer. [131057230030] |On my machine (with a few PDFs open) this displayed output as follows: [131057230040] |To just get the filenames, we can use awk: [131057230050] |or even better, [131057230060] |We can save these results in a file: [131057230070] |And later, to restore them: [131057230080] |To make this happen automatically, you can use whatever mechanisms your desktop environment provides to run the "save" command on exit and the "open" command on login. [131057230090] |A bit more robustness can be added by ensuring that the PDFs returned by lsof are being opened by your user. 
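The approach just described can be sketched as a small helper (the function name is mine; it assumes filenames end in .pdf and contain no spaces, since the last lsof column is taken as the path):

```shell
# list_open_pdfs: filter `lsof` output on stdin down to unique *.pdf paths.
# Typical use:   lsof -u "$USER" 2>/dev/null | list_open_pdfs > ~/open-pdfs.txt
# Restore later: xargs -d '\n' evince < ~/open-pdfs.txt &   (GNU xargs)
list_open_pdfs() {
  awk 'tolower($NF) ~ /\.pdf$/ { print $NF }' | sort -u
}
```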
[131057230100] |One advantage of this method is that it should work for any pdf viewer that takes command-line arguments. [131057230110] |One disadvantage is that it depends on file names; however, with a bit of poking, the requirement of knowing the filename extension could probably also be removed. [131057240010] |lsof lists the files open by a process. [131057240020] |Evince does keep the files open. [131057240030] |If you're the only user using evince, under Linux, here's how to see what files it's using: [131057240040] |To automate this, you'll want to keep only the open regular files of the evince process. [131057240050] |Here's a script that outputs the file names. [131057250010] |How do I clear Bash's cache of paths to executables? [131057250020] |When I execute a program without specifying the full path to the executable, and Bash must search the directories in $PATH to find the binary, it seems that Bash remembers the path in some sort of cache. [131057250030] |For example, I installed a build of Subversion from source to /usr/local, then typed svnsync help at the Bash prompt. [131057250040] |Bash located the binary /usr/local/bin/svnsync for "svnsync" and executed it. [131057250050] |Then when I deleted the installation of Subversion in /usr/local and re-ran svnsync help, Bash responds: [131057250060] |But, when I start a new instance of Bash, it finds and executes /usr/bin/svnsync. [131057250070] |How do I clear the cache of paths to executables? [131057260010] |bash does cache the full path to a command. [131057260020] |To clear the entire cache: [131057260030] |Or just one entry: [131057260040] |More info in help hash and man bash . [131057270010] |To clear just one entry you need a different flag: [131057270020] |The -r flag doesn't take a parameter and will always delete the entire cache. 
[131057270030] |(At least in bash 3.2.39 on debian lenny) [131057280010] |System.map file update [131057280020] |I found that the System.map file contains addresses of symbols. [131057280030] |Does it involve system calls? [131057280040] |I read that it is only updated when a new kernel is compiled. [131057280050] |So does that mean that, except for a new kernel compilation, these are always stored at the same addresses? [131057290010] |System.map contains a symbol table, i.e. a list of function names in the Linux kernel, giving for each function the address at which its code is loaded in memory (the addresses are not physical addresses; they're in the kernel's address space, like any executable symbol table is in the loaded process address space). [131057290020] |This isn't limited to system calls (the interfaces exposed to user processes): the file also lists functions that might be called by a loaded module, and even internal functions. [131057290030] |The system calls are the symbols whose name begins with sys_. [131057290040] |The addresses are associated with a particular kernel binary (vmlinux, bzImage or other format; the image format doesn't change the addresses, it's just an encoding); they are reproducible for a given kernel source, configuration and compiler. [131057290050] |The file is generated by scripts/mksysmap near the end of the kernel build process; it is the output of the nm command. [131057290060] |The file is used mainly for debugging, but it's also read when compiling some third-party modules that use unstable kernel interfaces (unstable as in changing from one version to the next). [131057300010] |Who wrote the "Linux kernel" (Linus Torvalds and his team)? [131057300020] |Who are the authors of the pure Linux kernel from scratch, which was integrated with GNU tools and formed the full GNU/Linux Operating system in the 1990s? [131057300030] |I have read some wiki articles but I haven't got a clear idea of the history. 
[131057310010] |Richard Stallman is the father of the GNU Project; Linus Benedict Torvalds is the author of the Linux kernel (Linux version 0.01 was released by mid-September 1991). [131057310020] |The real story is: [131057310030] |

    Year 1991:

    [131057310040] |DOS, brought by Bill Gates, reigned over the world of personal computers. [131057310050] |The other player in the personal computer world was UNIX from Bell Labs, but it was extremely expensive and the source was not publicly available. [131057310060] |Then there was MINIX by Andrew Tanenbaum, which was not a superb OS, but its source code was publicly available. [131057310070] |Tanenbaum captured the imagination of computer science students with an elaborate and lively discussion of the art of creating a working OS. Students of computer science all over the world went through the book, reading the code to understand the very system that runs their computer, and one of them was Linus Torvalds. [131057310080] |The GNU project had created a lot of the tools, like GCC, but there was still no OS. [131057310090] |For the rest of the story, and how Linux was written, please read through the following link. [131057310100] |

    Linux History Timeline:

[131057320010] |The wikipedia page has a fairly clear history. [131057320020] |Linus Torvalds, then a student, wrote his own kernel in the summer of 1991 because he was unhappy with the available Unix kernels: Unix itself (with the Bell Labs code) was extremely expensive (even PC unices such as Xenix); there was Andrew Tanenbaum's MINIX, but it was only available to purchasers of Tanenbaum's book; and Torvalds was unaware of the Berkeley-led effort to produce a free Unix (BSD), which didn't run on PCs yet at the time anyway. [131057320030] |Since then, thousands of people have contributed to the kernel, most of them in the form of drivers. [131057330010] |How to understand the hotplug mechanism for USB in the Android stack? [131057330020] |I want to know where I could get some material to study the USB hot plug-in mechanism in the Android stack. [131057330030] |I tried googling many times, but didn't find anything useful. [131057340010] |Okay, found it... on http://android.git.kernel.org it is located in kernel/linux-2.6.git/drivers/usb/ [131057350010] |How to get the inserted kernel module? [131057350020] |For debugging with GDB, I would like to have a kernel module whose source code I don't have. [131057350030] |I suspect it's a virus. [131057350040] |Is there any way I could feed it to GDB for analysis? [131057360010] |From a debugging perspective, the kernel is a special "process", distinct from the user space processes, which communicate with the kernel via a sort of RPC mechanism (syscalls) or mapped memory. [131057360020] |I don't think you can see the kernel's data structures simply by inspecting some random user process. [131057360030] |Another problem is that every user space process (including the debugger) needs the kernel to run and to communicate with the user; I don't think you can just stop the kernel and expect the debugger to continue to run. 
[131057360040] |So you need to run gdb on a second machine; that is what is called kernel debugging. [131057360050] |Please refer to http://kgdb.linsyssoft.com/ and Documentation/sh/kgdb for more details. [131057370010] |How to speed up my too-slow ssh login? [131057370020] |Running ssh user@hostname takes ~30s. Here's the scenario: [131057370030] |
  • this is a VM on the local LAN
  • [131057370040] |Windows and Mac machines get instant login
  • [131057370050] |I am using Debian and I could reproduce with an Ubuntu machine
  • [131057370060] |someone using Ubuntu says that logging into my machine (local LAN) is also instant
  • [131057370070] |using the hostname's IP address takes about half as much time (~15s)
[131057370080] |[update] [131057370090] |Using ssh -vvv user@hostname, here's where it waits the most: [131057370100] |And then it waits a bit here: [131057380010] |Have you verified your DNS setup? [131057380020] |Try the setting mdns off in /etc/host.conf. [131057380030] |This disables mdns resolution and helped me a lot. [131057380040] |EDIT: [131057380050] |It seems gentoo handles this a bit differently. [131057380060] |To disable multicast DNS lookups, you have to change the file /etc/nsswitch.conf. [131057380070] |There should be something like: [131057380080] |Change it to: [131057390010] |Edit your "/etc/ssh/ssh_config" and comment out these lines: [131057400010] |I had this problem and ended up turning off reverse DNS resolution in SSH. [131057400020] |So in sshd_config change this: [131057400030] |to this: [131057410010] |What happens when a USB drive is plugged in? [131057410020] |What is the flow of USB events from kernel space to user space? [131057410030] |I get a popup on my desktop when I plug in a USB drive -- what is the sequence of events that leads from detecting the new USB drive to mounting it and showing that popup? [131057410040] |(I know it's a very broad question, but please guide me with some pointers on how to understand the whole idea behind it. [131057410050] |I have no problem in browsing large blocks of code if I can be guided correctly.) [131057420010] |I'm not completely confident with this yet, so take it with a grain of salt and do more research. [131057420020] |It starts with the kernel hotplug subsystem. [131057420030] |After a device is set up, it either calls whatever userspace program is set up to handle hotplug events (if one was set by echo hotplug_handler >/proc/sys/kernel/hotplug) or sends a data packet over the kobject_uevent netlink socket. [131057420040] |When the kernel launches the hotplug handler, it sets up some environment variables. 
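For example, a throwaway handler that just records that environment could look like this (a debugging sketch for a test machine only; the log path and registration step are assumptions):

```shell
#!/bin/sh
# Hypothetical hotplug debug handler: append each event's environment
# to a log file. Register with (test machines only, as root):
#   echo /usr/local/sbin/hotplug-log.sh > /proc/sys/kernel/hotplug
HOTPLUG_LOG=${HOTPLUG_LOG:-/tmp/hotplug-events.log}
log_event() {
  {
    echo "=== event at $(date)"
    env                      # ACTION, DEVPATH, SUBSYSTEM, etc.
    echo
  } >> "$HOTPLUG_LOG"
}
log_event
```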
[131057420050] |When the kernel sends a data packet, it includes key=value pairs. [131057420060] |If you want to, you can set up a script that just logs the environment and register it as the handler (not on your production system, of course; a test setup). [131057420070] |Usually, udev is set up as the handler, and it will have several rules about how to handle events. [131057420080] |From there, it can launch other programs that do other things (like issue dbus messages). [131057420090] |These udev rules are highly dependent on the particular distribution of interest. [131057420100] |There is a lot of information in this thread where someone is trying to write some documentation; note the first message is not accurate, keep reading. [131057430010] |This is handled by udev on modern Linux systems. [131057430020] |The udev daemon started with the system will search in /etc/udev/rules.d and /lib/udev/rules.d and will run matching rules for kernel events. [131057430030] |Inserting a USB drive will trigger an event; udev will search for a matching rule and will execute it. [131057430040] |The rules themselves will determine what your system does. [131057430050] |In recent years, udev has communicated with HAL, which would alert applications via DBUS. [131057430060] |This approach is now obsolete in favor of a unified udev solution, which I presume will involve udevd communicating via dbus directly, or via dbus-send. [131057430070] |You can monitor the activities of udev via 'udevadm monitor'. [131057440010] |How to uninstall GRUB [131057440020] |I just installed Arch Linux and it's not working well, and now I can't access Windows because of GRUB. [131057440030] |How can I uninstall GRUB through the Arch Linux shell? [131057450010] |You cannot 'uninstall' grub. [131057450020] |You can overwrite it with the Windows bootloader. 
[131057450030] |I'm afraid that most people who know how to reinstall the Windows bootloader without reinstalling Windows are on superuser (I guess for most people here Windows is the second system); you have to do it from a Windows install disk or similar tools. [131057450040] |You should be able to chainload the Windows bootloader from grub with the following code, BTW: [131057450050] |or by entering the command line in GRUB [131057450060] |PS. [131057450070] |What is not working well? [131057450080] |That is the part we can help with. [131057450090] |Have you tried a simpler distribution, for example Ubuntu? [131057460010] |I had to get rid of the dual boot on my work computer (needed space for work stuff) and I used the MBRFix utility, which you can download here: [131057460020] |http://www.sysint.no/products/Download/tabid/536/language/en-US/Default.aspx [131057470010] |installing ffmpeg-php on centos [131057470020] |With the help of this guide, I am trying to install ffmpeg using these commands: [131057470030] |But when I run replace 'PIX_FMT_RGBA32' 'PIX_FMT_RGB32' -- * I get this error: [131057470040] |replace: Error reading file 'autom4te.cache' (Errcode: 21) replace: Error reading file 'build' (Errcode: 21) ffmpeg_frame.c converted replace: Error reading file 'include' (Errcode: 21) replace: Error reading file 'modules' (Errcode: 21) replace: Error reading file 'tests' (Errcode: 21) [131057470050] |And when I run make and skip that line I get: [131057470060] |Any ideas? [131057480010] |The errors from replace are harmless; it's just telling you (cryptically) that these files are directories and it can't act on them. [131057480020] |But you do need to run phpize and ./configure … before you can run make. [131057490010] |Can't you use yum? [131057490020] |On Ubuntu, doing aptitude install php5-ffmpeg seems to automatically install ffmpeg and all its dependencies. [131057490030] |Perhaps the same package is available for CentOS? 
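For reference, the phpize/./configure/make sequence mentioned in the earlier answer is the standard build dance for a PHP extension; a sketch, with an example directory name (wrapped in a function so nothing runs unless you call it):

```shell
# Sketch: build a PHP extension such as ffmpeg-php from source.
# phpize comes from the PHP development package (php-devel on CentOS).
build_php_ext() {
  cd "$1" || return 1   # e.g. ffmpeg-php-0.6.0 (example path)
  phpize &&             # generate ./configure from config.m4
  ./configure &&
  make &&
  sudo make install
}
# Usage (hypothetical): build_php_ext ~/src/ffmpeg-php-0.6.0
```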
[131057500010] |vim (Vi IMproved) is a modular text editor that is upward compatible with vi. [131057500020] |Its speciality lies in its editing modes, which serve different tasks. [131057500030] |Each mode can be reached by one or more specified keys. [131057500040] |
  • Normal (ESC): copy, (re)move, replace words and lines, text formatting
  • [131057500050] |Insert (i,a,o): insert or remove text
  • [131057500060] |Visual (v): visually highlight a text area
  • [131057500070] |Select (gh): modify selected text area
  • [131057500080] |Command-line (:): complex commands, search(-replace), filter, (un)set options
  • [131057500090] |Ex (Q): like command-line mode but stays in Ex mode after entering a command
[131057500100] |Further, vim provides the following features: [131057500110] |
  • Syntax highlighting
  • [131057500120] |Autocorrection
  • [131057500130] |Tabs and splitscreens
  • [131057500140] |Undo/redo operations
  • [131057500150] |Plugin support through user scripts
  • [131057500160] |Archive editing (tar, gz, zip)
  • [131057500170] |History
  • [131057500180] |Wrapping
  • [131057500190] |Macros
  • [131057500200] |and more...
[131057500210] |On modern Linux systems, vi is generally linked to vim. [131057500220] |vimtutor gets you started with vim in a short time. vim also provides a great help section; start the editor and type :help. [131057500230] |More resources: [131057500240] |
  • vim.org
  • [131057500250] |man vim
  • [131057500260] |/usr/share/vim/current/doc
[131057510010] |vim (Vi IMproved) is a text editor supporting different editing modes. [131057520010] |Chakra project or netinstall -- Arch Linux installation [131057520020] |My arch linux installation didn't go well (apparently I should have gone through netinstall instead of core). [131057520030] |When I try again, should I use the Chakra project (which is supposedly easier), or should I use netinstall? 

    Describe where you are stuck?

[131057530020] |Core is fine; use an ethernet cable for the internet connection. [131057530030] |Being new to linux: wifi works for many out of the box during the install, but if not it may frustrate a bit. [https://wiki.archlinux.org/index.php/Wireless_Setup] [131057530040] |You will need wpa_supplicant, wireless_tools, and the driver for your card, [131057530050] |which you can pick when you select packages from the '[*] dev_base' group during the installation. [131057530060] |You'll want to try to install arch at least once successfully before moving to chakra, since it's based on arch; if you have issues, this will help you get used to finding answers. [131057530070] |Skim through the sections, then read the parts you're not clear on. [131057530080] |[ https://wiki.archlinux.org/index.php/Beginners_Guide ] [131057530090] |It may also be a good idea to practice installing arch and then chakra in VirtualBox before going live with your real system. [131057530100] |Try to have another computer to read guides from while installing so you can troubleshoot. [131057540010] |Is there a limit of hardlinks for one file? [131057540020] |Is there a limit on the number of hardlinks for one file? [131057540030] |Is it specified anywhere? [131057540040] |What are safe limits for Linux? [131057540050] |And what for other POSIX systems? [131057550010] |This is file system dependent. [131057550020] |The ext2/3/4 limit is 65k links. [131057550030] |ext4 source line 595, struct ext4_inode -> __le16 i_links_count [131057560010] |Looking at the ext3 inode structure disk format in the linux kernel sources (include/linux/ext3_fs.h), the links count is listed as a 16-bit number. [131057560020] |I guess that means that an ext3 filesystem can have up to 65535 links per file. [131057560030] |I haven't checked the values for other filesystems. [131057570010] |Posix requires that the operating system understand the concept of hard links, but not that hard links can actually be used in any particular circumstance. 
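Separately, the current link count of a file is easy to watch from the shell (GNU coreutils stat shown; BSD stat spells the option differently):

```shell
# Create a file, add two hard links, and read the link count back.
d=$(mktemp -d)
touch "$d/file"
ln "$d/file" "$d/link1"
ln "$d/file" "$d/link2"
stat -c %h "$d/file"    # GNU stat: prints the hard-link count (3 here)
rm -r "$d"
```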
[131057570020] |You can find out how many hard links are permitted at a particular location (this can vary by filesystem type) by calling pathconf(filename, _PC_LINK_MAX). [131057570030] |The minimum limit (_POSIX_LINK_MAX) is 8, but this is rather meaningless as link() can report many other errors anyway (permission denied, disk full, …). [131057570040] |The stat structure stores the link count in a field of type nlink_t, so the type of this field gives an upper limit on your system. [131057570050] |But there's a good chance you'll never be able to reach that far: it's common to have a 32-bit nlink_t but only 16 bits in many filesystems (a quick grep in the Linux source shows that ext[234], NTFS, UFS and XFS use 16-bit link counts in the kernel data structures). [131057580010] |Using URLs with parentheses with Lynx [131057580020] |I often use Lynx on a remote computer to look at websites (faster than port-forwarding). [131057580030] |Sometimes the URLs I want to go to have un-escaped characters (for example brackets) that Lynx seems to need encoded; for example, http://www.example.com/This(URL)is anExample.html should be http://www.example.com/This%28URL%29is%20anExample.html. [131057580040] |Is there an existing script for this? [131057580050] |Alternatively, is there some option for Lynx that would make it unnecessary? [131057590010] |You can escape a string on the command line by using single quotes, so [131057590020] |will pass the URL unchanged to lynx, or any other program. [131057600010] |Why does high disk I/O reduce system responsiveness/performance? [131057600020] |I never quite understood why high disk I/O slows the system down so much. [131057600030] |It's strange to me because I would expect the slow-down to affect only those processes dependent on the hard/optical drive data, but the slow-down affects even stuff loaded into RAM. [131057600040] |I'm here referring to iowait. [131057600050] |Why does the processor wait, instead of doing other work? 
[131057600060] |Can anyone explain this limitation and why it hasn't been solved in the Linux kernel? [131057600070] |Is there a kernel out there that doesn't have this problem? [131057600080] |[note] There has been some progress in this performance area. [131057600090] |For one, the later kernels (2.6.37 in my case) are much more responsive. [131057610010] |Operating systems make use of virtual memory so that more memory can be used than there is physical RAM available. [131057610020] |When the kernel decides that it has a better use for a physical memory page, its content may be "paged out" for storage on disk. [131057610030] |When such a virtual memory page is accessed while paged out, it generates a page fault and is moved back from the disk to RAM. [131057610040] |Page faults are a disaster for performance because disk latency is measured in milliseconds, while RAM latency is measured in nanoseconds. [131057610050] |(1 millisecond = a million nanoseconds!) [131057610060] |Take a look at this visualization where 1 pixel = 1 nanosecond; zoom in at the top! [131057610070] |Memory is not only used by user processes, but also by the kernel for things like file system caching. [131057610080] |During file system activity, the kernel will cache recently used data. [131057610090] |The assumption is that there is a good chance that the same data will be used again shortly, so caching should improve I/O performance. [131057610100] |Physical memory being used for the file system cache cannot be used for processes, so during file system activity more process memory will be paged out and page faults will increase. [131057610110] |Also, less disk I/O bandwidth is available for moving memory pages from and to the disk. [131057610120] |As a result processes may stall. [131057620010] |As far as I understand it, iowait means that a process, not the processor, is waiting for IO to become available.
[131057620020] |Processors have gained speed much faster than hard drives have, meaning code finishes quickly and then the disk needs to be read. [131057620030] |When more needs to be read than the drive can deliver fast enough, you end up with processes waiting. [131057620040] |The way it's decided who gets to read/write to the disk is determined by the block scheduler, in most cases now CFQ. [131057620050] |If you're using CFQ, and you need a process to use less of the overall IO time to increase system responsiveness, you can use ionice -c3; this tells the system to give this process IO only when nothing else needs IO. [131057620060] |This is still interesting and explains the iowait problem better. [131057630010] |Suspending my laptop breaks ethernet over firewire, are there commands which can fix it? [131057630020] |As mentioned in this question I am using a firewire cable to provide a private network between my laptop and my desktop, because it makes using the screen sharing program synergy much nicer than using WIFI. [131057630030] |However, when I leave my office for the day and suspend my laptop, on my return the next day the desktop and the laptop can no longer communicate over firewire. [131057630040] |The firewire0 device still has an IP address, but when I try to ping the desktop I get no route to host. [131057630050] |I'm using kernel 2.6.35-24-generic #42-Ubuntu SMP x86_64 on Ubuntu 10.10. [131057630060] |Is there some way I can remedy this without a reboot? [131057630070] |Like, removing some kernel modules and re-inserting them? [131057630080] |EDIT: Here's what I have tried so far and the results: [131057630090] |EDIT 2: I also tried removing the modules prior to suspending, and re-inserting after resuming. [131057630100] |This did not work either :-( [131057640010] |Have you tried using modprobe? [131057640020] |Running dmesg | grep firewire, I get: [131057640030] |This says that the name of the module is firewire_ohci.
[131057640040] |So I run (as root) modprobe -vr firewire_ohci && modprobe -v firewire_ohci. [131057640050] |These remove and insert the module, respectively. [131057650010] |Have you tried removing the physical cable? It should alert the stack that the connection needs to be rebuilt. [131057650020] |Disabling the firewire connection and re-enabling it may work as well. [131057650030] |You could add a script that does that automagically on resume. [131057650040] |DC [131057660010] |I think the ARP table just becomes empty. [131057660020] |Try this: [131057670010] |Video editor w/ stabilization? [131057670020] |As I slowly migrate from Apple's Mac OS X to a Linux distribution, I try to find good replacements for Apple's iLife software suite. [131057670030] |I particularly like the current iMovie's image stabilization feature, where at the cost of losing a bit of resolution, shaky footage can be stabilized so that you almost can't tell the camera was shaking. [131057670040] |Is there an open source Linux video editor that has a similar feature? [131057680010] |Cinelerra is the only tool I'm aware of that can do this in Linux. [131057680020] |There is a tutorial on just this topic. [131057690010] |Tar up all PDFs in a directory, retaining directory structure [131057690020] |I'm trying to create a compressed tarball that contains all PDF files that exist in one of my directories. [131057690030] |The directory structure needs to be retained. [131057690040] |Empty directories are not needed, but I really don't care if they're there. [131057690050] |For example, say I had a directory that looked like this: [131057690060] |After running the command, I'd like to have dir.tar.gz contain this: [131057690070] |Possible?
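Yes; for reference, here is a sketch using GNU find and tar (the sample tree below is made up to mirror the layout described above):

```shell
# Build a small sample tree: two PDFs in subdirectories, plus a non-PDF file.
mkdir -p dir/a dir/b
touch dir/a/one.pdf dir/b/two.pdf dir/notes.txt

# List the PDFs NUL-separated and feed them to GNU tar; the paths keep their
# directory structure inside the archive, and non-PDF files are never listed.
find dir -name '*.pdf' -print0 | tar -czf dir.tar.gz --null -T -

# Inspect the result.
tar -tzf dir.tar.gz
```

The listing contains dir/a/one.pdf and dir/b/two.pdf but not notes.txt (--null and -T - are GNU tar options; a portable alternative using pax is given below).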
[131057700010] |This will list all the PDFs: [131057700020] |You can pipe that to xargs to get it as a single space-delimited line, and feed that to tar to create the archive: [131057700030] |(This way omits the empty directories) [131057710010] |With bash ≥4 or zsh and GNU tar: [131057710020] |This might not work if you have a very large number of PDF files and the command line is too long. [131057710030] |Then you would need a more complex find-based solution (again, using GNU tar): [131057710040] |Alternatively (and portably) you can create the archive with pax. [131057710050] |The first -s says to include all .pdf files, without changing their name. [131057710060] |The second -s says to rename all other files to an empty name, which actually means not to include them in the archive. [131057720010] |What if 'kill -9' doesn't work? [131057720020] |I have a process I can't kill with kill -9 process-pid. [131057720030] |What's the problem in such a case, considering that nothing should be able to evade that kill signal? [131057720040] |[note] I am the owner of that process. [131057730010] |Did you use sudo before the command? [131057730020] |It should help. [131057740010] |It sounds like you might have a zombie process. [131057740020] |This is harmless: the only resource a zombie process consumes is an entry in the process table. [131057740030] |It will go away when the parent process dies or reacts to the death of its child. [131057740040] |You can see if the process is a zombie by using top or the following command: [131057750010] |Sometimes a process exists and cannot be killed due to: [131057750020] |
  • being a zombie. [131057750030] |I.e. a process whose parent did not read its exit status. [131057750040] |Such a process does not consume any resources except a PID entry. [131057750050] |In top it is signalled by Z.
  • [131057750060] |
  • erroneous uninterruptible sleep. [131057750070] |It should not happen, but with a combination of buggy kernel code and/or buggy hardware it sometimes does. [131057750080] |The only remedy is to reboot or wait. [131057750090] |In top it is signalled by D.
[131057760010] |kill -9 (SIGKILL) always works, provided you have the permission to kill the process. [131057760020] |Basically either the process must be started by you and not be setuid or setgid, or you must be root. [131057760030] |There is one exception: even root cannot send a fatal signal to PID 1 (the init process). [131057760040] |However kill -9 is not guaranteed to work immediately. [131057760050] |All signals, including SIGKILL, are delivered asynchronously: the kernel may take its time to deliver them. [131057760060] |Usually, delivering a signal takes at most a few microseconds, just the time it takes for the target to get a time slice. [131057760070] |However, if the target has blocked the signal, the signal will be queued until the target unblocks it. [131057760080] |Normally, processes cannot block SIGKILL. [131057760090] |But kernel code can, and processes execute kernel code when they call system calls. [131057760100] |Kernel code blocks all signals when interrupting the system call would result in a badly formed data structure somewhere in the kernel, or more generally in some kernel invariant being violated. [131057760110] |So if (due to a bug or misdesign) a system call blocks indefinitely, there may effectively be no way to kill the process. [131057760120] |(But the process will be killed if it ever completes the system call.) [131057760130] |A process blocked in a system call is in uninterruptible sleep. [131057760140] |The ps or top command will (on most unices) show it in state D (originally for “disk”, I think). [131057760150] |A classical case of long uninterruptible sleep is processes accessing files over NFS when the server is not responding; modern implementations tend not to impose uninterruptible sleep (e.g. under Linux, the intr mount option allows a signal to interrupt NFS file accesses). [131057760160] |You may sometimes see entries marked Z (or H under Linux, I don't know what the distinction is) in the ps or top output.
[131057760170] |These are technically not processes, they are zombie processes, which are nothing more than an entry in the process table, kept around so that the parent process can be notified of the death of its child. [131057760180] |They will go away when the parent process pays attention (or dies). [131057770010] |Kill actually means send a signal. There are multiple signals you can send; kill -9 is a special one. [131057770020] |When you send a signal, the application deals with it; if not, the kernel deals with it. So you can trap a signal in your application. [131057770030] |But I said kill -9 was special. [131057770040] |It is special in that the application doesn't get it: it goes straight to the kernel, which then truly kills the application at the first possible opportunity. In other words, it kills it dead. [131057770050] |kill -15 sends the signal SIGTERM (termination), which tells the application to quit; this is the friendly way to tell an application it is time to shut down. But if the application is not responding, kill -9 will kill it. [131057770060] |If kill -9 doesn't work, it probably means your kernel is out of whack; a reboot is in order. [131057770070] |I can't recall that ever happening. [131057780010] |If @Maciej's and @Gilles's answers don't solve your problem, and you don't recognize the process (and asking what it is with your distro doesn't turn up answers): [131057780020] |Check for rootkits and any other signs that you've been owned. [131057780030] |A rootkit is more than capable of preventing you from killing the process. [131057780040] |In fact many are capable of preventing you from seeing them. [131057780050] |But if they forget to modify one small program, they might be spotted (e.g. they modified top, but not htop). [131057780060] |Most likely this is not the case, but better safe than sorry. [131057790010] |Check your /var/log/kern.log and /var/log/dmesg (or equivalents) for any clues.
[131057790020] |In my experience this has happened to me only when an NFS mount's network connection has suddenly dropped or a device driver crashed. [131057790030] |Could happen if a hard drive crashes as well, I believe. [131057790040] |You can use lsof to see what device files the process has open. [131057800010] |You wrote you own the process, so my reply is slightly off topic, but for the record, note that the init process is immune to SIGKILL. [131057810010] |There are cases where even if you send kill -9 to a process, that pid will stop, but the process restarts automatically (for instance, if you try it with gnome-panel, it restarts): could that be the case? [131057820010] |No output produced when using winFF [131057820020] |I am using winFF in Ubuntu 10.04. [131057820030] |WinFF is a graphical frontend to ffmpeg. [131057820040] |Typically, after selecting a file in WinFF, setting my conversion settings and pressing the "Convert" button, a console appears with output from the conversion process and prompts requesting permission to continue. [131057820050] |However, now, when I press convert, I only see a blank console with a command prompt such as: [131057820060] |I thought I had misconfigured something, but I reinstalled everything and the problem continues. [131057820070] |Could you help me? [131057820080] |In general console output still works, since I see output when I run the following script: [131057830010] |I don't know anything about this software, but after quickly installing it, I get the impression it was not well packaged. [131057830020] |You said that you tried reinstalling the package; however, when you reinstalled it, did you also remove its configuration files? [131057830030] |By default in Debian, Ubuntu, and other distributions, configuration files are left behind in case you reinstall again.
[131057830040] |I would try something like the following (as root): [131057830050] |From the apt-get manual page: [131057830060] |purge [131057830070] |purge is identical to remove except that packages are removed and purged (any configuration files are deleted too). [131057840010] |I had the same problem. [131057840020] |Solution (not nice though): [131057840030] |
  • Go to Options and select Display CMD Line
  • [131057840040] |
  • Click on Convert: [131057840050] |
  • A terminal appears with the commands
  • [131057840060] |
  • Copy the contents of the window and save them to a file
  • [131057840070] |
  • Run it as a shell script (sh filename)
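In script form, the last two steps amount to something like this (the ffmpeg line is a stand-in for whatever command line WinFF actually displays):

```shell
# Save the command line copied from WinFF's terminal window into a file.
# An echo stands in for the real ffmpeg invocation so the sketch runs anywhere.
cat > winff-cmd.sh <<'EOF'
echo ffmpeg -i input.flv output.avi
EOF

# Run it as a shell script.
sh winff-cmd.sh
```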
[131057850010] |Does Solaris have an equivalent to /etc/ld.so.conf? [131057850020] |I compiled a package for Solaris 11 Express that has some library dependencies, which I also compiled from source and installed in the usual /usr/local. [131057850030] |(And Solaris doesn't even have /usr/local pre-created!) [131057850040] |So, my program runs correctly, but I have to run it with [131057850050] |or it complains that it couldn't find libsomething.so. [131057850060] |How do I include /usr/local/lib in the library search path, system-wide? [131057850070] |Linux has /etc/ld.so.conf -- Solaris doesn't. [131057860010] |Check out the section about setting up the linker: http://bwachter.lart.info/solaris/solfaq.html [131057860020] |You want the crle command. [131057870010] |If it's acceptable for you, you can set a library search path when compiling (more precisely, when linking). [131057870020] |Pass the -rpath option to ld, or tell the compiler to do so, e.g. [131057880010] |Tri-booting Windows, Ubuntu, and SUSE [131057880020] |I have just bought a new laptop (Packard Bell EasyNote TM 87) which has Windows 7 preinstalled. [131057880030] |There are already partitions labelled PQSERVICE, SYSTEM RESERVED and Packard Bell C:. [131057880040] |I cannot remove this Windows installation because I don't have a copy of the Windows-only recovery DVDs. [131057880050] |I want to tri-boot Windows, Ubuntu, and SUSE. [131057880060] |First, I created an extended partition using gparted with two partitions and was left with some unallocated space. [131057880070] |I installed Ubuntu in the extended partition by creating / and /home logical partitions. [131057880080] |Now I want to install SUSE in the unallocated space. [131057880090] |The SUSE 11.3 live CD does not list that unallocated space. [131057880100] |What should I do? [131057890010] |You can have a maximum of four non-logical partitions (non-logical meaning primary or extended).
[131057890020] |Also, most tools only support one extended partition. [131057890030] |Linux doesn't care if it's on logical or primary partitions. [131057890040] |I think your best bet is to resize that extended partition so that it covers everything except the Windows partition and the PQSERVICE partition. [131057890050] |Gparted can do this without affecting the logical partitions that are already there. [131057890060] |If my answer doesn't help, boot a Linux live CD and report the output of fdisk -l (type this command in a terminal, and if at all possible copy-paste the output). [131057900010] |Are there any side effects when two distros share a swap partition? [131057900020] |In order to save disk space, I want to have two OS installations share a single swap partition (a dual-boot). [131057900030] |Is this a good idea? [131057910010] |One side effect I can think of is: first, hibernate system1 (using the swap partition for hibernation), then boot system2. [131057910020] |You could lose data. [131057920010] |It's possible. [131057920020] |In fact, you can share the swap space between completely different operating systems, as long as you initialize the swap space when you boot. [131057920030] |It used to be relatively common to share swap space between Linux and Windows, back when it represented a significant portion of your hard disk. [131057920040] |Two restrictions come to mind: [131057920050] |
  • The OSes cannot be running concurrently (which you might want to do with virtual machines).
  • [131057920060] |
  • You can't hibernate one of the OSes while you run another.
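If you do share one swap partition, both installations simply point at the same device from their own /etc/fstab; a sketch (the UUID is a placeholder, not a real one):

```
# Identical line in each distro's /etc/fstab:
UUID=00000000-0000-0000-0000-000000000000  none  swap  sw  0  0
```

Each system then runs swapon on it at boot; as the restrictions above say, this is only safe if the systems never run concurrently and neither hibernates to that partition while the other is used.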
[131057930010] |One of my friends has tried this. [131057930020] |He has installed five or six distros on a single hard drive. [131057930030] |The first primary partition is for grub, and he is able to boot all the distros. [131057930040] |The second partition is swap. [131057930050] |The third partition is an extended partition, and each of the distros is installed into its own logical partition. [131057930060] |All of the distros boot and can hibernate. [131057930070] |I think you just need to make sure to select the correct distro after resuming from hibernation. [131057930080] |So, on the basis of his experiment I should say YES. [131057930090] |This is possible. [131057930100] |But I think it can break things. [131057930110] |What if distro 2 wakes up while distro 1's resume image is using up the swap partition; what's going to happen next? [131057930120] |So I too agree with all the above posts. [131057930130] |Why don't you use separate swap partitions, rather than taking this huge risk? [131057940010] |How does ssh -X function? [131057940020] |When using ssh -X, is the executable copied and run locally, or is it run on the host machine? [131057940030] |Since it is called X11 forwarding, it makes me think that the window is drawn on my machine but the program runs on the host. [131057950010] |That's right, the application is run on the host while displayed on the local machine. [131057960010] |The application runs remotely, except the X components (i.e. rendering the X commands etc.) which are running locally. [131057960020] |Every client application usually uses the local X server to display the UI. [131057960030] |In this case, the commands are sent via the encrypted SSH channel from the remote machine to your local machine and are displayed there. [131057970010] |The executable is run on the remote machine and displayed (drawn) on the local machine. [131057970020] |What ssh -X remote does is start up a proxy X11 server on the remote machine.
[131057970030] |If you do echo $DISPLAY on the remote machine, you should see something like localhost:21.0. [131057970040] |That is telling the program running on the remote machine to send drawing commands to the X11 server with id 21. [131057970050] |This then forwards those commands to the real X11 server running on the local machine, which draws on your screen. [131057970060] |This forwarding happens over an encrypted ssh connection, so they can't be (easily) listened to. [131057970070] |Unlike Windows, Mac OS, etc, X11 was designed from the beginning to be able to run programs across a network, without needing things like remote desktop. [131057970080] |For a while, X11 thin clients were popular. [131057970090] |A thin client is basically a stripped-down computer that only runs an X11 server. [131057970100] |All of the programs run on some application server somewhere. [131057980010] |Setting $DISPLAY is only half of the deal though. [131057980020] |In order to be able to authenticate the clients on the server side, ssh also utilizes xauth to create a new authentication cookie. [131057980030] |See xauth list and ~/.Xauthority. [131057990010] |The key may be to realize that the X server is a single thing which provides graphics to a user, and all the different programs which want graphics have to be X clients and connect to a server. [131057990020] |The interface between client programs and the X server was designed from the start to support connections with remote programs, not just those on the local machine. [131057990030] |In a crazy enough network, a program could be run anywhere and display its graphics anywhere else... [131058000010] |Chakra overriding Arch: is this possible? [131058000020] |My Arch installation didn't go well, so can I install Chakra to override the Arch installation? [131058000030] |By just installing Chakra on the same partition as Arch? [131058000040] |Is it possible? [131058000050] |How hard is the Chakra installation?
[131058010010] |I've never installed Chakra. My suggestion: install it on the same partition, but make sure that you tell Chakra to format the partition (losing all data, etc.). [131058020010] |How to enter a tab character in vim with SuperTab plugin enabled? [131058020020] |How to enter a tab character in vim with SuperTab plugin enabled? [131058030010] |I've not used this extension myself, but I would guess that ^V-Tab might work. ^V in general can be used in insert mode to insert a literal keystroke instead of whatever that key is mapped to do. [131058030020] |So you type Control-V, then hit whatever key or key combo you want to insert literally. [131058040010] |You could also use the indent functionality by typing >>, which, depending on your indent settings, would use a tab character. [131058050010] |How to install Flash player plugin on Fedora? [131058050020] |Anyone have a recommended way to get and install Flash effectively on Fedora? [131058060010] |Adobe’s Flash plugin is not included in Fedora because it is not free. [131058060020] |Adobe has released a version of the Flash plugin for Linux. [131058060030] |

    Enabling Flash Plugin

    [131058060040] |To begin, refer to the Adobe site at http://get.adobe.com/flashplayer/. This will download the adobe-release-i386-1.0-1.noarch.rpm file. [131058060050] |Issue the following command within the directory where you have downloaded the repository rpm file. [131058060060] |The .rpm file also copies the Adobe GPG key to /etc/pki/rpm-gpg/RPM-GPG-KEY-adobe-linux but does not import it. [131058060070] |To import the key, type: [131058060080] |The system is now ready to fetch rpm packages from Adobe using yum. [131058060090] |To verify this, take a look at the /etc/yum.repos.d/adobe-linux-i386.repo file that was just created. [131058060100] |You should see something similar to the following: [131058060110] |Notice that the file contains the URL where the packages are located, whether or not the repository should be enabled, whether rpm should check downloaded packages from Adobe against the GPG key, and the location of the key itself. [131058060120] |Now proceed to the instructions for 32-bit (i386) platforms (I assume that..) [131058060130] |On 32-bit Fedora [131058060140] |After completing the Adobe repository configuration, run the following command to install the Flash plugin and ensure sound is enabled: [131058060150] |You may see messages indicating that alsa-plugins-pulseaudio and libcurl are already installed. [131058060160] |This is not a problem. [131058060170] |Note also that nspluginwrapper is installed by default and will wrap the Flash plugin. [131058060180] |If you have upgraded from a previous version of Fedora, make sure that nspluginwrapper is installed and the latest version: [131058060190] |Next, you should check the plugin. [131058060200] |Checking the plugin [131058060210] |After the installation, exit all copies of Firefox and start it again to enable the plugin.
[131058060220] |Then type the following text in the Firefox address bar: [131058060230] |This information tells you that the Adobe Flash plugin has been successfully installed. [131058070010] |Have you tried going to adobe.com and downloading it? [131058080010] |Doing two things with output from a command. [131058080020] |I have a program texcount that outputs the number of words in my LaTeX document. [131058080030] |I can also pipe the output of this to sed to make the newlines TeX linebreaks and write this to a file which I can then include in my final document. [131058080040] |But when I do texcount foo.tex | sed s/$/'\\\\'/ >wc.tex the command line output of texcount is suppressed. [131058080050] |How can I get the output of the first command to be displayed in the terminal and piped to sed? [131058090010] |You need the "tee" command, which allows you to split pipes. [131058090020] |This would leave you the additional output.txt file. [131058090030] |Read this for more info: http://www.unixtutorial.org/2007/12/tee-replicate-standard-output/ You can also do "man tee". [131058100010] |You can use an anonymous pipe for the second command: [131058110010] |You can use the fairly idiomatic [131058110020] |/dev/tty is a magic device that refers to the controlling terminal of the current process. [131058120010] |Install new kernel in Debian powerpc [131058120020] |I compiled a new kernel image on x86 using a cross compiler with these commands. http://www.thebitsource.com/programming-software-development/how-to-install-debian-linux-on-an-embedded-system/ [131058120030] |Now, I want to install this new image on a debian-powerpc. [131058120040] |Debian-powerpc uses quik as the default bootloader. [131058120050] |Thanks, [131058130010] |History command inside bash script. [131058130020] |Hi all, I have been bashing my head to write a simple history script for the last two days.
[131058130030] |history is a shell built-in command, and I haven't been able to use it within a bash script. [131058130040] |So, is there a way to attain this using a bash script? [131058130050] |Here is my script for you: [131058140010] |I'm not sure if it actually uses the history capability when running non-interactively, otherwise every shell script you run would clutter up your command history. [131058140020] |Why not go directly to the source, ${HOME}/.bash_history: replace history | tail -100 with tail -100 ${HOME}/.bash_history. [131058140030] |(If you use timestamps you'd probably have to do something along the lines of grep -v ^# ${HOME}/.bash_history | tail -100). [131058150010] |The history builtin seems to be disabled inside a shell script. [131058150020] |See here: http://www.tldp.org/LDP/abs/html/histcommands.html [131058150030] |I have not found any official documentation about this. [131058160010] |Bash disables history in noninteractive shells by default, but you can turn it on. [131058160020] |But if you're trying to monitor activity on that server, the shell history is useless (it's trivial to run commands that don't show up in the history). [131058160030] |See How can I log all process launches in Linux. [131058170010] |How to build a python egg for a TRAC plugin? [131058170020] |I'd like to install a mercurial plugin for Trac. [131058170030] |The manual provides an svn path with the source code. [131058170040] |I need to create an "egg" from that source code. [131058170050] |Do I need to have SVN installed in order to do this, or can this be done in a different way? [131058170060] |(I'm very new to Linux, I've been using it only for the past two weeks.) [131058180010] |If you scroll down on the page you linked to, you'll see that they have it so you can download a zip file snapshot of each version. [131058180020] |The rest of the instructions are on that page as well, for how to build the egg.
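For the record, the egg build itself needs only Python's setuptools, not Subversion; a minimal sketch (the directory name and the toy setup.py below are stand-ins for the real snapshot contents):

```shell
# Unzip the snapshot, then run bdist_egg from the directory containing
# setup.py. A toy project stands in for the real plugin here.
mkdir -p tracmercurial
cat > tracmercurial/setup.py <<'EOF'
from setuptools import setup
setup(name='TracMercurial', version='0.0', packages=[])
EOF
(cd tracmercurial && python3 setup.py bdist_egg)

# The built egg lands in dist/; copy it into your Trac environment's
# plugins directory to install it.
ls tracmercurial/dist/
```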
[131058190010] |How to get libmp3lame encoding to work with ffmpeg? [131058190020] |When I am converting an FLV file to an AVI one, I always get: [131058190030] |But I've installed it with Ubuntu Software Center. [131058190040] |I am using this command: [131058190050] |How do I get this to work? [131058200010] |Run (as root) apt-get install libavcodec-extra-52. [131058200020] |If that doesn't work, ensure that you have multiverse enabled, do apt-get update and try again. [131058200030] |[note] This is for Ubuntu 10.10 (Maverick). [131058200040] |See if you can find a similarly-named package if you use a different Ubuntu release, by running aptitude search libavcodec-extra. [131058210010] |What are the different software packaging formats and which distributions support them as part of base install? [131058210020] |Which distros support RPMs? [131058210030] |Which distros support DEB packages? [131058210040] |What other distros and package formats exist? [131058210050] |Here is what I think I know, but am interested in getting a more precise answer: [131058210060] |
  • RPM - Red Hat, Fedora, CentOS, SUSE
  • [131058210070] |
  • DEB - Debian, Ubuntu
  • [131058210080] |
  • Pacman - Arch
  • [131058210090] |
  • tar.gz - all, depends on what is in the compressed file though
[131058210100] |Lots of others probably in the long tail. [131058210110] |I know you can use 'alien' to convert RPM to DEB, but it isn't installed by default in a new Ubuntu setup, so I'm not counting that. [131058220010] |There's a list of Linux Package Formats on wikipedia here and a list of common package management systems here. [131058230010] |Setting SVN permissions with davsvnauthz [131058230020] |There seems to be a path inheritance issue which is boggling me over access restrictions. [131058230030] |For instance, if I grant rw access to one group/user, and wish to restrict some /../../secret to none, it promptly spits in my face. [131058230040] |Here is an example of what I'm trying to achieve in dav_svn.authz [131058230050] |What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories. [131058230060] |What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing. [131058230070] |If I flip the access ordering, where I give everyone access and restrict it in each repository, it promptly ignores the invalidation rule (stripping of rights) and gives everyone the access granted at the root level. [131058230080] |Forgoing groups, it performs the same with user specific provisions; even fully defined such as: [131058230090] |Which yields the exact same result. [131058230100] |According to the documentation I'm following all protocols, so this is insane. [131058230110] |Running on Apache2 with dav_svn. [131058240010] |After a bunch of headaches, I let this idle with * = rw at SVNParentPath level. [131058240020] |Coming back to it, I suddenly had a stroke of the obvious hit me: the read order was the issue. [131058240030] |Firstly, my example naming conventions were flat out wrong as it should be [:]. [131058240040] |My actual conventions were correct, so syntax is not the root cause.
[131058240050] |The main issue is that the authz file expects an order of 'specificity', where the first rule read, i.e. the first available match, is applied. [131058240060] |In my case, everything would match at the root and it would be one-and-done. Thus, by reversing my example ordering: [131058240070] |it is accepted and performs as expected. [131058240080] |This behavior is NOT DOCUMENTED and in my opinion is a serious snafu over something utterly trivial. [131058250010] |Why is badblocks segfaulting? [131058250020] |I am trying to check a mounted partition to see if the drive has errors: [131058250030] |Uh oh. [131058250040] |What does this mean? [131058250050] |Why is badblocks segfaulting? [131058250060] |Can I fix it? [131058250070] |(The system is CentOS release 4.6; the drive is a SATA drive.) [131058250080] |EDIT: Using strace: [131058260010] |The last few lines of that strace tell a fairly boring tale: badblocks opens the drive device, gets its size, closes it, reopens it and then goes off to do some work, which fails in some way strace doesn't show. [131058260020] |You'd have to use gdb or similar to dig deeper. [131058260030] |Your symptom may go away if you unmount the partition so badblocks has a stable thing to work on. [131058260040] |Obviously this shouldn't be required just to do the read-only test you're attempting, but it wouldn't be the first time that some low-level uncommonly-used operation didn't work as it should. [131058260050] |Bonus: If you unmount the partition, you can use badblocks -n, which is far more effective at finding and fixing disk surface problems. [131058270010] |It turned out this was a numbskull error; it looks like my copy of badblocks may have just had a bug. [131058270020] |I ran yum update and after that, badblocks no longer segfaults. [131058280010] |Is there a CD/DVD disk reading test tool for Linux?
[131058280020] |When I was using DOS and Windows, I saw quite a selection of tools to check optical disks for readability and benchmark an optical drive itself. [131058280030] |Most of them even visualised the results in the form of a pretty chart. [131058280040] |Are there any such tools for GNU/Linux OSes? [131058280050] |I'd prefer a full-featured visual GUI tool, but for the particular case I've got now, I just need to check if my CD drive can read every byte of a particular heavily-scratched CD-RW disk. [131058290010] |To simply see if a drive can be read, you can use dd(1). [131058290020] |This will read in the contents of the CDROM and ignore/discard the data (note that the CDROM device may have another name on your system): [131058290030] |It is also possible to compare this to an ISO image: [131058290040] |This will print a checksum for the CD and for the ISO file. [131058290050] |If the checksums match, the CD contents match the ISO image. [131058300010] |I've used dvdisaster to help me recover data from a few DVD and CD-R coasters I burned. [131058300020] |It's a GTK application, and probably available as a package on your favorite Linux distribution. [131058300030] |It has a nice graphical display showing which sectors are good and bad. [131058300040] |It also keeps various statistics while reading your media. [131058310010] |How well does alien work for converting packages? [131058310020] |Is it feasible to build an RPM package and then utilize alien to create the DEB package rather than investing time in building a DEB package? [131058310030] |Or do certain pieces not translate well? [131058320010] |Yes, it is feasible. [131058320020] |However, you'd probably be better off using an application like checkinstall to create both package types for your users. [131058320030] |There are a few howtos out there, this one on lwn.net and this one on linuxjournal.com. [131058330010] |I do the opposite (DEB->RPM) and it works fine.
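The dd-based check described earlier in this thread would be along these lines (the device and ISO file names are assumptions; the sketch guards against the drive not being present):

```shell
dev=${CDROM:-/dev/cdrom}    # assumption: your drive may instead be /dev/sr0 etc.

if [ -e "$dev" ]; then
    # Read the entire disc, discarding the data; dd stops at the first read error
    dd if="$dev" of=/dev/null bs=2048

    # Compare the disc against an ISO image (file name is an example)
    md5sum "$dev" image.iso
else
    echo "no optical device at $dev" >&2
fi
```

If the two md5sum lines print the same checksum, the disc matches the image.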
[131058340010] |Alien is good in some cases, e.g. you want to install a package fast and there is only a DEB or an RPM for that package. [131058340020] |From my experience, Alien is not reliable for deploying a package in a distro, e.g. you created an RPM package from your project, now want to create a DEB package as well, and don't want to spend time learning how deb packaging works. [131058340030] |And you just use Alien. (It might work well, but it has limitations; it depends on what package you throw at it.) [131058340040] |What I recommend: if you want to build packages for multiple Linux distributions and multiple architectures, the way to go is the openSUSE Build Service (OBS). [131058340050] |Its philosophy is: "Maintain sources once, offer binaries for any platform". [131058340060] |For an overview of what you can do with it, watch http://www.youtube.com/watch?v=pjOUX0WFkkk ; also see the OBS Build Tutorial. [131058350010] |It's worth trying, if you have no other starting point. [131058350020] |When it works, it works well. [131058350030] |But if you think there will be other people who would benefit from the package, it's worth investing the effort to publish a native one. [131058360010] |Why does '/' have an '..' entry? [131058360020] |This just baffles me. [131058360030] |Why does the root directory contain a reference to a parent directory? [131058360040] |I understand how directories are managed in the filesystem - each directory has n+2 pointers to itself. [131058360050] |One for each immediate subdirectory, one for its parent, and one for itself. [131058360060] |But what is /'s parent? [131058370010] |/.. points to /: [131058370020] |On this system, both have the same inode number, 2. [131058370030] |It's done for consistency. [131058370040] |This way, there doesn't have to be code in the kernel to check where it currently is when it processes a .. in a path. [131058370050] |You can say cd .. forever, and never go deeper than the root.
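This is easy to verify yourself; on a Linux system both paths report the same inode:

```shell
# Show the inode numbers of / and /.. -- they are identical,
# which is exactly why "cd .." at the root takes you nowhere.
ls -id / /..
stat -c '%i %n' / /..
```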
[131058380010] |It's there because it's a guarantee made by Unix: every directory contains two entries, . which refers to itself, and .. which refers to the parent. [131058380020] |The root directory of the current namespace is special, in that .. points to the same thing as ., but not so special as to break the guarantee made by the OS to programs. [131058380030] |When those contracts are broken, things go wrong and everyone points fingers. [131058380040] |The root directory that you see might, in the filesystem on disk, actually have a different parent directory. [131058380050] |The view of the filesystems provided in the mounted namespace is what enforces the .. = . rule for /. [131058380060] |So if you're in a chroot() jail, you will see /.. = / even though someone outside the jail looking at /path/to/jail/.. will see /path/to instead. [131058390010] |More doubts in bzImage [131058390020] |The description of the bzImage in Wikipedia is really confusing me. [131058390030] |The above picture is from Wikipedia, but the line next to it is: [131058390040] |The bzImage file is in a specific format: It contains concatenated bootsect.o + setup.o + misc.o + piggy.o. [131058390050] |I can't find the others (misc.o and piggy.o) in the image. [131058390060] |I would also like to get more clarity on these object files. [131058390070] |The info on this post about why we can't boot a vmlinux file is also really confusing me. [131058390080] |Another doubt is regarding the System.map. [131058390090] |How is it linked to the bzImage? [131058390100] |I know it contains the symbols of vmlinux before creating bzImage. [131058390110] |But then at the time of booting, how does bzImage get attached to the System.map? [131058400010] |Till Linux 2.6.22, bzImage contained: [131058400020] |
  • bbootsect (bootsect.o)
[131058400030] |  • bsetup (setup.o)
[131058400040] |  • bvmlinux (head.o, misc.o, piggy.o)
[131058400050] |Linux 2.6.23 merged bbootsect and bsetup into one (header.o). [131058400060] |At boot up, the kernel needs to initialize some sequences (see the header file above) which are only necessary to bring the system into a desired, usable state. [131058400070] |At runtime, those sequences are not important anymore (so why include them in the running kernel?). [131058400080] |System.map stands in relation to vmlinux; bzImage is just the compressed container out of which vmlinux gets extracted at boot time (=> bzImage doesn't really care about System.map). [131058400090] |Linux 2.5.39 introduced CONFIG_KALLSYMS. [131058400100] |If enabled, the kernel keeps its own map of symbols (/proc/kallsyms). [131058400110] |System.map is primarily used by user space programs like klogd and ksymoops for debugging purposes. [131058400120] |Where to put System.map depends on the user space programs which consult it. ksymoops tries to get the symbol map either from /proc/ksyms or /usr/src/linux/System.map. klogd searches in /boot/System.map, /System.map and /usr/src/linux/System.map. [131058400130] |Removing /boot/System.map generated no problems on a Linux system with kernel 2.6.27.19.
[131058420060] |If your directory has n directories in it, that's n links. [131058420070] |So a total of n+2 links to any given directory. [131058430010] |Linux filesystems are all POSIX compliant and rely on an inode pointer structure to represent directory relations. [131058430020] |Apart from the above Wikipedia link, you can have a look at the POSIX inode description, or the IBM article on 'The anatomy of the Linux filesystem'. [131058440010] |How to view a TTF font file? [131058440020] |Is there an application to simply preview a font from a TTF file without installing it? [131058450010] |gnome-font-viewer (part of GNOME of course) can do this (this is the default association for fonts under GNOME); indeed, it comes with a button to install the font, which obviously wouldn't make sense if the font needed to be installed already. [131058450020] |fontmatrix lets you organize groups of fonts to be installed or uninstalled, and you can preview them and see their features, whether installed or not. [131058450030] |Most font editors, like fontforge, certainly don't require the fonts to be installed to open them up and look at them... [131058450040] |There are others, I'm sure. [131058460010] |Bash script to find and kill a process with certain arguments? [131058460020] |I want a script which kills the instance(s) of ssh which are run with the -D argument (setting up a local proxy). [131058460030] |Manually, I do ps -A | grep -i ssh, look for the instance(s) with -D, and kill -9 {id} each one. [131058460040] |But what does that look like in bash script form? [131058460050] |(I am on Mac OS X but will install any necessary commands via port) [131058470010] |Run pgrep -f "ssh.*-D" and see if that returns the correct process ID. 
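Put together as a script, the pgrep/pkill approach might look like this (the pattern is an assumption; tune it to match your actual ssh command line):

```shell
#!/bin/sh
# Kill ssh processes that were started with a -D (dynamic SOCKS proxy) argument.
# The bracketed [D] keeps the pattern from matching this script's own command line.
pattern='ssh.*-[D]'

pids=$(pgrep -f "$pattern")
if [ -n "$pids" ]; then
    echo "Sending SIGTERM to: $pids"
    pkill -TERM -f "$pattern"   # polite SIGTERM first; escalate to -9 only if needed
else
    echo "No matching ssh processes found"
fi
```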
[131058470020] |If it does, simply change pgrep to pkill and keep the same options and pattern. [131058470030] |Also, you shouldn't use kill -9 aka SIGKILL unless absolutely necessary, because programs can't trap SIGKILL to clean up after themselves before they exit. [131058470040] |I only use kill -9 after first trying -1, -2 and -3. [131058480010] |You can leverage the proc file system to gather the information. [131058480020] |For example: [131058480030] |It's not perfect; you'll want a more exclusive regex (especially if you are killing processes), but echo $proc | awk -F'/' '{ print $3 }' will show you the PID of the process(es). [131058490010] |Also, [131058500010] |Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work?? [131058500020] |Hey all, [131058500030] |I'm trying to learn more about library versioning in Linux and how to put it all to work. [131058500040] |Here's the context: [131058500050] |-- I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. [131058500060] |-- An application is linked against libsome1.so. [131058500070] |-- This application uses libdl.so to dynamically load another module, say libmagic.so. [131058500080] |-- Now libmagic.so is linked against libsome2.so. [131058500090] |Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. [131058500100] |This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION. [131058500110] |-- So I try next to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. [131058500120] |This works... [131058500130] |Or at least libVersion() and LIB_VERSION values match (and it reports version 2, not 1).
[131058500140] |-- However, when some data structures are serialized to disk, I noticed some corruption. [131058500150] |In the application's directory, if I delete libsome1.so and create a soft link in its place to point to libsome2.so, everything works as expected and the same corruption does not happen. [131058500160] |I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. [131058500170] |I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2)... [131058500180] |Nothing seems to work!!! [131058500190] |Help!!!!!! [131058510010] |This doesn't exactly answer your question, but... [131058510020] |First of all, ELF is the specification used by Linux for executable files (programs), shared libraries, and also object files, which are the intermediate files found when compiling software. [131058510030] |Object files end in .o, shared libraries end with .so followed by zero or more digits separated by periods, and executable files don't have any extension normally. [131058510040] |There are typically three forms to name a shared library; the first form simply ends in .so. [131058510050] |For example, a library called readline is stored in a file called libreadline.so and is located under one of /lib, /usr/lib, or /usr/local/lib normally. [131058510060] |That file is located when compiling software with an option like -lreadline. -l tells the compiler to link with the following library. [131058510070] |Because libraries change from time to time, a library may become obsolete, so libraries embed something called a SONAME. [131058510080] |The SONAME for readline might look like libreadline.so.2 for the second major version of libreadline.
[131058510090] |There may also be many minor versions of readline that are compatible and do not require software to be recompiled. [131058510100] |A minor version of readline might be named libreadline.so.2.14. [131058510110] |Normally libreadline.so is just a symbolic link to the most recent major version of readline, libreadline.so.2 in this case. libreadline.so.2 is also a symbolic link to libreadline.so.2.14, which is actually the file being used. [131058510120] |The SONAME of a library is embedded inside the library file itself. [131058510130] |Somewhere inside the file libreadline.so.2.14 is the string libreadline.so.2. [131058510140] |When a program is compiled and linked with readline, it will look for the file libreadline.so and read the SONAME embedded in it. [131058510150] |Later, when the program is actually executed, it will load libreadline.so.2, not just libreadline.so, since that was the SONAME that was read when it was first linked. [131058510160] |This allows a system to have multiple incompatible versions of readline installed, and each program will load the appropriate major version it was linked with. [131058510170] |Also, when upgrading readline, say, to 2.17, I can just install libreadline.so.2.17 alongside the existing library, and once I move the symbolic link libreadline.so.2 from libreadline.so.2.14 to libreadline.so.2.17, all software using that same major version will now see the new minor update to it. [131058520010] |Looking for an old classical Unix toolkit textbook [131058520020] |I am looking for a book about the Unix command-line toolkit (sh, grep, sed, awk, cut, etc.) that I read some time ago. [131058520030] |It was an excellent book, but I totally forgot its name. [131058520040] |The great thing about this specific book was the running example. [131058520050] |It showed how to implement a university bookkeeping system using only text-processing tools.
[131058520060] |You would find a student by name with grep, update grades with sed, calculate average grades with awk, attach grades to IDs with cut, and so on. [131058520070] |If my memory serves, this book had a black cover, and was published circa 1980. [131058520080] |Does anyone remember this book? [131058520090] |I would appreciate any help in finding it. [131058530010] |Black cover? [131058530020] |You sure you don't mean the Unix and Linux System Administrators Handbook? [131058530030] |It's pretty much the Unix bible. [131058530040] |Not sure if it has that example you are speaking of though. [131058530050] |It's been a long time since I opened that book. [131058530060] |Edit: Just a note, the cover has changed slightly over the years. [131058530070] |It used to be purple if I remember right. [131058540010] |Using UNIX by Example, P.C. Poole & N. Poole? [131058540020] |http://books.google.co.uk/books?ei=dhs4TZrROcnpgQfq4bTGCA&ct=result&id=LK9QAAAAMAAJ&dq=grep+student+name&q=grades#search_anchor [131058540030] |and on Amazon at http://www.amazon.com/Using-Unix-Example-P-Poole/dp/0201185350 [131058550010] |It sounds vaguely like "UNIX Shell Programming" by Stephen Kochan and Patrick Wood. [131058550020] |The book uses creating a phonebook or rolodex to illustrate the use of various commands in building shell scripts. [131058550030] |The original edition came out in...uhm...1990? [131058550040] |Cover was a dark purple, darker than the one pictured in the amazon link below. [131058550050] |http://www.amazon.com/Unix-Shell-Programming-Stephen-Kochan/dp/0672324903 [131058560010] |How do I check if a file already has a line with "contents" in it? [131058560020] |I need to know if a file already has a line with contents X in it, and if not, append the line. Here's the code I've tried. [131058570010] |You may have better luck with awk's index function: it's an equivalent of strstr, so it should be well-suited for comparison (as opposed to grep, which is for pattern matching).
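Following the awk suggestion, a check-and-append sketch (the file name and line contents here are placeholders):

```shell
line='contents X'
file=/tmp/example.txt
rm -f "$file"            # start fresh for the demonstration

# awk exits 0 when some line contains $line as a substring (strstr-style);
# on a miss (or a missing file) it exits non-zero and we append the line.
if ! awk -v s="$line" 'index($0, s) { found = 1 } END { exit !found }' "$file" 2>/dev/null
then
    printf '%s\n' "$line" >> "$file"
fi
```

Running the same snippet again leaves the file unchanged, which is the idempotence the question is after.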
[131058580010] |The $(...) will return the output of the command, not its exit status. [131058580020] |Use the command without command substitution to get the proper return code. [131058590010] |To search for a literal string: [131058590020] |-q suppresses grep output (you're only interested in the return status). -x requires the whole line to match. -F searches for a literal string rather than a regexp. -e is a precaution in case $line starts with a -. [131058600010] |You can do this with sed. [131058600020] |The issue is that you need to include the string in the code twice. [131058600030] |Once to test it, again to insert it. [131058610010] |What's the limit on the no. of partitions I can have? [131058610020] |I would like to know how many primary and extended partitions I can create in an x86_64 PC with Linux running on it. [131058610030] |Update: If there is a limit to the number of partitions, then why is it so? [131058620010] |4 primary partitions, or alternatively 3 primary partitions and an extended partition. [131058620020] |The extended partition can be subdivided into multiple logical partitions. [131058630010] |The limitation is due to the original BIOS design. [131058630020] |At that time, people weren't expecting more than four different OSes to be installed on a single disk. [131058630030] |There was also a misunderstanding of the standard by OS implementors, notably Microsoft and Linux, which erroneously map file systems to (primary) partitions instead of subdividing their own partition into slices like BSD and Solaris do, which was the original intended goal. [131058630040] |The maximum number of logical partitions is unlimited by the standard, but the number of reachable ones depends on the OS. Windows is limited by the number of letters in the alphabet; Linux used to have 63 slots with the IDE driver (hda1 to hda63), but modern releases standardize on the sd driver, which supports 15 slots by default (sda1 to sda15).
[131058630050] |By some tuning this limit can be overcome, but doing so might confuse tools (see http://www.justlinux.com/forum/showthread.php?t=152404 ). [131058630060] |In any case, this is becoming history with EFI/GPT. [131058630070] |Recent Linuxes support GPT, with which you can have 128 partitions by default. [131058630080] |To fully use large disks (2TB and more) you'll need GPT anyway. [131058640010] |Sen, in response to @jlliagre, it should be noted that some operating systems will create a single partition, but essentially create sub-partitions within that space. [131058640020] |It is analogous, but not equal, to doing: [131058640030] |You could then use kpartx to access these sub-partitions: [131058640040] |The sub-partition(s) would appear as: [131058640050] |Of course, this isn't how FreeBSD and similar systems do their slicing, exactly, but it is essentially the same thing. [131058650010] |fedora 14 xen virtual machine does not detect network [131058650020] |I'm trying to create a para-virt Fedora 14 xen vm on a host running CentOS 5.5. [131058650030] |The VM appears to initially install correctly (the installer finds eth0 and can access the installation media). [131058650040] |But when I restart the VM after the installation is completed, the VM no longer sees any network, only lo. [131058650050] |The VM does have one network connection according to virt-manager. [131058650060] |The host is running CentOS 5.5 64bit - completely up-to-date. The VM I am trying to install is F14 64bit (installing only the minimal group). [131058650070] |Update: After taking a look at this again, I think the problem has to do with my network setup. [131058650080] |In the server I have two NICs (eth0, eth1) that I've bonded together (bond0). [131058650090] |I'm guessing that I need to make changes to xen so it recognizes the bonded NICs, though I don't know why the guest gets a network interface during installation.
[131058660010] |Distro - lightweight and easy to install [131058660020] |Ubuntu was too slow on my computer and the Arch installation had many problems too. [131058660030] |Which distro do you recommend that's lightweight as well as easy to install? [131058670010] |There are many, but this is the one that immediately comes to my mind: Damn Small Linux [131058670020] |EDIT: I also found this article about lightweight distros: What's the best lightweight Linux distro? [131058670030] |Also, you can try switching to Xfce instead of GNOME as your desktop environment. [131058670040] |Xfce is a lot lighter than GNOME/KDE. [131058680010] |The Damn Small Linux project is dead. [131058680020] |The lead developer moved on to Tiny Core Linux. [131058680030] |Although the sub-30MB distros are probably too lightweight for most purposes. [131058680040] |Something a little bit bigger might work as well: [131058680050] |Puppy Linux, Crunchbang Linux [131058690010] |Consider Lubuntu, which aims to be the LXDE version of Ubuntu (though it has not been officially recognized by Canonical as an official spin-off); much lighter weight than the standard one. [131058690020] |(I also recommend Crunchbang, which has been covered already.) [131058700010] |If you have an older computer, the important thing is not to use resource-hungry applications or settings. [131058700020] |Gnome+compiz on Puppy Linux will be just as slow as on Ubuntu, if you manage to install it. [131058700030] |Pick any distribution whose installer will run on your computer, and use a lightweight window manager. [131058700040] |See for example the lightest way to have a GUI in Linux?, How to get rid of desktop environment and use a window manager only?. [131058710010] |Have you looked at SliTaz? [131058710020] |It is a very lightweight (~30MB) distribution with a very easy-to-use package manager; a new version is due out soon. [131058710030] |The entire OS can be loaded into RAM, which makes it extremely fast.
[131058710040] |Installation is very quick and simple, plus there's a very helpful forum. [131058710050] |Here's the official feature list: [131058710060] |
  • Root filesystem taking up about 100 MB and ISO image of less than 30 MB.
[131058710070] |  • Ready to use Web server powered by LightTPD with CGI and PHP support.
[131058710080] |  • Browse the Web with Midori or Retawq in text mode.
[131058710090] |  • Sound support provided by Alsa mixer, audio player and CD ripper/encoder.
[131058710100] |  • Chat, mail and FTP clients.
[131058710110] |  • SSH client and server powered by Dropbear.
[131058710120] |  • Database engine with SQLite.
[131058710130] |  • Generate a LiveUSB device.
[131058710140] |  • Tools to create, edit or burn CD or DVD images.
[131058710150] |  • Elegant desktop with Openbox running on top of Xorg/Xvesa (X server).
[131058710160] |  • Homemade graphical boxes for command line utilities.
[131058710170] |  • 2300 packages easily installable from the mirror.
[131058710180] |  • Active and friendly community.
[131058710190] |I've used it myself and like it a lot. [131058720010] |You should definitely give a second try to Archlinux... [131058720020] |Its slogan is: "A simple, lightweight distribution". [131058720030] |You may object, but in my opinion the installation of Arch is very simple and basic (just don't forget about the great and rich documentation available on the wiki: https://wiki.archlinux.org/). [131058720040] |I can install the whole system in less than half an hour and end up with a flexible and feature-full working environment, having complete control over my system! [131058720050] |With no version problems (it is a rolling release distro) and with pacman -- a fantastic package manager that magically manages all the dependencies and other installation tasks in a simple and transparent way, archlinux is definitely worth a try! [131058720060] |If you want a clean, efficient and simple linux distribution which follows all the modern requirements, archlinux is for you! [131058720070] |Post-scriptum: In case of any questions about arch you can directly contact me -- rizo[dot]isrof[at]gmai[dot]com ;) [131058730010] |Is it 'possible' to transfer a VM to the metal? [131058730020] |How can one take a VM and make it run on the machine? [131058740010] |It depends on what you use for virtualization. [131058740020] |Qemu allows you to install the OS to a partition on your hard disk, and you can either boot into it or load it up in Qemu. [131058740030] |If your VM is installed to a file on your filesystem like VirtualBox does, it may be possible to convert it to a disk image that you can install to a hard disk, but it's more effort on your part than what Qemu can do for you. [131058740040] |With VirtualBox there isn't any easy way to synchronize the disk partition and the VDI file so you can swap back and forth between them.
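For the file-backed case, one possible route is qemu-img (a sketch; qemu-img is assumed to be installed, and the source image here is fabricated just to show the conversion):

```shell
# A stand-in VM disk image (in practice this would be your existing VDI/qcow2 file)
qemu-img create -f qcow2 /tmp/vm-disk.qcow2 8M

# Convert it to a raw image that could be written onto a physical disk
qemu-img convert -f qcow2 -O raw /tmp/vm-disk.qcow2 /tmp/vm-disk.raw

# Writing it to real hardware would then be something like:
#   dd if=/tmp/vm-disk.raw of=/dev/sdX bs=4M    # DESTRUCTIVE: double-check sdX
```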
[131058750020] |Some virtualization products, especially desktop virtualization products, store data in opaque formats. [131058750030] |In that case, you'll need to extract the filesystems from the disk images. [131058750040] |Each virtualization product will have a different and sometimes proprietary way of doing this. [131058750050] |If you're building a virtualized datacenter, however, you can actually plan on making virtual machines that can be easily migrated to or from a virtualized environment. [131058750060] |In this case, you'll be best off using a SAN, such as iSCSI, assigning raw block storage to your virtual machines. [131058750070] |For example, I personally create iSCSI LUNs which appear as block devices under Linux. [131058750080] |Then, I boot these machines with Xen. [131058750090] |I can easily shut these machines down and then use gPXE to boot the machine directly from the iSCSI volume. [131058750100] |This is probably not what you're looking to do, but it is possible! [131058750110] |Not to be forgotten, however, is that once your storage is accessible, the OS itself needs to be configured to find its devices. [131058750120] |Using UUIDs in your /etc/fstab will help, for instance. [131058750130] |If booting from a SAN, you will need a properly constructed initrd.
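For instance, a UUID-based /etc/fstab entry looks like the following (the UUIDs shown are made-up examples; blkid reports the real ones on your system):

```
# /etc/fstab -- mounting by UUID keeps working even if device names change
UUID=0a3407de-014b-458b-b5c1-848e92a327a3  /     ext4  defaults  0 1
UUID=f9fe0b69-a280-415d-a03a-a32752370dee  none  swap  sw        0 0
```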
[131058760080] |You need to have the right drivers, and to configure the bootloader and maybe /etc/fstab properly. [131058760090] |See for example Moving linux install to a new computer. [131058770010] |What are the differences between various VM software? [131058770020] |I have only ever used VirtualBox and I would like to know, for example, what I could be missing from other offerings. [131058770030] |I have heard of KVM and VMWare and I'm sure there are others. [131058770040] |Short of reading Wikipedia articles on each (phew!), how do they differ? [131058780010] |VirtualBox is a software application that runs on top of your OS. [131058780020] |It can use capabilities of your OS and hardware to accelerate the virtualization. [131058780030] |The VirtualBox software must remain running for the virtualized systems to remain operational. [131058780040] |Xen is a subclass of operating systems called a hypervisor: it is an OS which only provides virtualization. [131058780050] |It offloads management capabilities to a separate management OS which it calls the "dom0", usually Linux. [131058780060] |The management OS provides drivers for the physical hardware. [131058780070] |VMWare has several products. [131058780080] |VMWare Workstation works like VirtualBox, while VMWare ESX is a hypervisor similar to Xen. [131058780090] |A major difference from Xen is that ESX provides its own hardware drivers and as a result has limited hardware support. [131058780100] |KVM is a project which adds a hypervisor into the Linux kernel. [131058780110] |Because KVM uses a hypervisor, it does not need to remain running in the same fashion as VirtualBox. [131058780120] |While KVM is a hypervisor such as Xen and ESX, it is simultaneously a Linux kernel & OS of its own accord. [131058780130] |It should be noted that KVM's inclusion in Linux is often misunderstood as meaning it is generally accepted as the "blessed way forward".
[131058780140] |The KVM project is officially supported in Linux as it is a Linux kernel modification, while Xen and ESX are entirely separate operating systems. [131058790010] |I would classify virtual machine technologies into three categories (not all products fit clearly into one category): [131058790020] |
  • Full virtualization, i.e., complete hardware emulation. [131058790030] |Examples: Qemu, Dosbox. [131058790040] |Pro: you can potentially emulate any architecture on any hardware. [131058790050] |Con: it's the slowest way to do it.
[131058790060] |  • Hardware-assisted virtualization, where you can emulate machine X on machine X. This can be a lot faster than full virtualization, because most instructions are executed natively, but you lose the ability to run a foreign architecture. [131058790070] |There are two sub-categories: [131058790080] |
      • Hypervisor-based VMs: you run several OSes alongside each other. [131058790090] |The bottom layer, called the hypervisor, is a special-purpose OS that runs the VMs and nothing else. [131058790100] |Examples: Xen, VMware ESX.
[131058790110] |      • Hosted VMs: there is a main OS; the VM is an application on this main OS. Examples: VirtualBox, KVM.
[131058790120] |  • OS-level virtualization: you run several instances of the same OS. [131058790130] |This can in turn be a lot more lightweight than hardware virtualization, but you lose some isolation and of course the ability to run different OSes. [131058790140] |Examples: OpenVZ, FreeBSD jails.
[131058790150] |First determine the category that corresponds to your needs.