[131076530010] |Difference between ' and " on command line (bash)? [131076530020] |I used to use '' and "" interchangeably on the command line, but I recently noticed that '$HOME/some/dir' is not expanded, while "$HOME/some/dir" is. [131076530030] |I searched around a little bit and found that "" allows some protection of special characters, while '' allows full protection. [131076530040] |So what other characters are protected by '' and not ""? [131076530050] |EDIT: What are practical situations in which these differences might be significant? [131076540010] |Take a look at the bash man page. [131076540020] |There's an entire section on quoting. [131076540030] |Because this is licensed under the GFDL, which is not compatible with the CC-BY-SA license used here, I won't quote the whole thing, but really, reading that is the most definitive answer. [131076540040] |In summary, single quotes stop all interpretation -- the string is rendered literally. [131076540050] |Double quotes leave $ (dollar sign), ` (backquote) as special, and \ (backslash) as special when followed by certain other characters. [131076540060] |And ! will be treated specially if the history expansion feature is enabled (which it is by default). [131076540070] |In practical use, the $ is the big deal, as one often wants the various expansions it enables (variables and more), while still preventing the shell from muddling most of the command line. [131076550010] |Have a look at this question on Stack Exchange's Ask Ubuntu. [131076560010] |Dual-boot OS X and Fedora [131076560020] |I'm trying to install Fedora (dual-boot) on a brand-new MacBook Pro. [131076560030] |The live/installer CD runs flawlessly on the machine, which was a nice surprise. [131076560040] |But after the install is completed, I cannot actually boot into Fedora. [131076560050] |Here is what I did: [131076560060] |
  • OS X by default ships with an EFI partition (/dev/sda1) and an HFS+ partition (/dev/sda2). [131076560070] |I used Boot Camp in OS X to resize the existing OS X partition and reboot to the live CD.
  • [131076560080] |
  • Once the Fedora live CD is booted, I used the disk utility to delete the "Windows" partition (/dev/sda3) that Boot Camp created. [131076560090] |This leaves the original EFI and OS X partitions plus a chunk (40G) of empty space at the end of the disk.
  • [131076560100] |
  • Run the Fedora Installer, telling it to automatically partition and install using the free space on the disk. [131076560110] |It creates a 500M boot partition (/dev/sda3) and an LVM partition (/dev/sda4).
  • [131076560120] |
  • When the installer gets to the part about the boot loader, I go with the defaults, which is to install GRUB to the boot sector of the Fedora boot partition (/dev/sda3).
  • [131076560130] |After the install is complete, I would expect to be able to hold down the Mac's "Option" key to choose which OS to boot since /dev/sda3 has a boot sector, but only the OS X disk shows up. [131076560140] |(What does Boot Camp do, then, besides resizing the disk?) [131076560150] |I read somewhere that a boot loader called rEFIt would help, so I tried that. [131076560160] |I installed it using the instructions on the project's website and rebooted. [131076560170] |An option to boot Linux shows up in the rEFIt boot screen, but when selecting it, I get a black screen with an error message along the lines of "no operating system found." [131076560180] |Further research suggested that rEFIt had to "resync" in order to find GRUB, even though I installed rEFIt after Fedora. [131076560190] |I did that by going into the rEFIt "menu" and answering in the affirmative when it told me that a resync was necessary. [131076560200] |Now, when I select Linux in the rEFIt boot menu, it only shows a grayed-out penguin and nothing else happens. [131076560210] |The lack of output makes it difficult to troubleshoot exactly what's going wrong. [131076560220] |Maybe if I knew more about EFI, I could play around in the EFI shell provided by rEFIt. [131076560230] |Anyway, has anyone here had success dual-booting Fedora 14 on a newer Mac, and if so, what procedure did you use? [131076560240] |I'd like to install Fedora to the whole disk, but I'm reluctant to try that until I can get a dual-boot scenario to work. [131076560250] |If you could, please CC my email address in any responses since I'm subscribed to the digest version of the list. [131076560260] |Thanks for your time. [131076570010] |Faster way than ping for checking if computer online? [131076570020] |I'm writing a wake-on-LAN script for a set of our lab computers. [131076570030] |We have an SQLite db with a list of the computer hostnames, IPs, and MACs, and currently I ping each of them with '-c1' so it doesn't run endlessly, but even that takes some waiting. Is there a quicker way to get an answer than ping? [131076570040] |Using ping seems to slow the script quite a bit as it needs the ping answers to continue. [131076570050] |Thanks much for any suggestions! [131076580010] |What you could do is ping the broadcast address, which should cause all the computers to ping back. [131076580020] |Then you could cross-check this list against what you have in SQLite to ensure all the computers are up. [131076580030] |Other than that, a ping is probably the fastest way to ensure a computer is awake on a network. [131076580040] |As mentioned by the other answer, this doesn't provide any really useful data. [131076580050] |If you have the ability to install scripts you can add a cronjob to ping a central server, run a task, or just echo out the process list to a central server which will log the request. [131076580060] |Then simply checking that will tell you if you have any issues, with no need to manually check every time. [131076590010] |Sending a single packet and waiting for a response is going to be one of the fastest possible ways, and ping is a fine way to do that. [131076590020] |In fact, depending on your use case, I'd argue that it's too fast, since it doesn't really tell you if the system is actually doing anything useful, just that the kernel's network subsystem is alive and configured. [131076590030] |But assuming that's good enough, you can make some improvements.
[131076590040] |First, you could use -W1 to decrease the ping timeout to one second. [131076590050] |Second, you could make your script ping the different hosts asynchronously (in a background thread), and check the results as needed rather than waiting. [131076590060] |Alternatively, you can re-think the approach and have the remote systems check in somehow when they're up, and if a system hasn't checked in, you can assume it's down. [131076600010] |Ganglia uses multicast traffic to monitor many hosts in a cluster; perhaps you could use something similar? [131076600020] |This assumes that your networking hardware allows multicast traffic between all the hosts and your monitoring system. [131076610010] |This would only work for one or two computers, but if you connect them directly to the computer responsible for checking their status, you can use ethtool to see if the link is active or not. [131076620010] |This is what fping was designed for. http://fping.sourceforge.net/ [131076620020] |You need to parse the output afterwards instead of relying on a return code, but it is much faster than a normal ping. [131076630010] |How to stop application from being suspended by Ctrl+z? [131076630020] |Currently I'm running dvtm inside a terminal, and vim inside dvtm. [131076630030] |When I press Ctrl+z intending to suspend vim, dvtm gets suspended instead. [131076630040] |I didn't have this problem with screen or tmux, so I think it must be dvtm doing something wrong (or not doing something right). [131076630050] |How can I fix that? [131076630060] |Update: I was wrong, this is not a problem with dvtm. [131076630070] |Indeed I was using the dtach+dvtm combo and wrongly assumed that dvtm was at fault. [131076630080] |The problem is really with dtach. [131076640010] |Update (new answer): [131076640020] |dtach has a -z option with the description "Disable processing of the suspend key". [131076640030] |Confusing if you ask me, but its effect seems to be that the Ctrl+Z is passed through to Vim instead of being caught by dtach. [131076640040] |More general answer: a program like dtach or dvtm has pretty much absolute control over what gets passed through to whatever's running "inside" of it. [131076640050] |It's like having a secretary take dictation -- you can say what you like, but the secretary controls what actually appears on the paper. [131076640060] |So if you want the end program to receive the Ctrl+Z, you have to get all of the middle layers to cooperate, whether that be through command line options or source code editing. [131076640070] |Looks to me like this is a bug in dvtm. [131076640080] |From inspection of the source code, it's not catching the SIGTSTP signal, which is what is sent to your terminal's foreground application when you press Ctrl+z. [131076640090] |Since it's not catching the signal explicitly, it falls back on default behavior and gets suspended. [131076640100] |What dvtm needs to do is catch this signal and pass it along to one of its windows. [131076650010] |How to run a shell script containing an awk command [131076650020] |How to run this script (called count.sh)? [131076650030] |I'm trying to run it with sh count.sh but it's giving me an error. [131076660010] |This is what I've done for my own scripts: [131076660020] |The only drawback to this approach is that the path to Awk is hard-coded. [131076660030] |This will break when the script is exported via NFS, and the NFS client has Awk installed in a different directory (say, /bin/awk versus /usr/bin/awk.)
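One hedged way around that drawback is a small sh wrapper that lets $PATH locate awk at run time (a sketch only; the awk body is a placeholder, since the original count.sh isn't shown):

    #!/bin/sh
    # count.sh: find awk via $PATH instead of hard-coding its location
    awk '{ count[$1]++ } END { for (k in count) print k, count[k] }' "$@"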
[131076660040] |Setting the executable permission will allow you to directly call the script, without having to use the shell to launch it (you can also leave off the .sh extension, since by convention scripts don't include them): [131076670010] |I ran your script, but I didn't get any error (although I expected to :) ) [131076670020] |What error did you get? [131076670030] |I expected awk to ask for an input file to process; learned a new thingie :) [131076680010] |Is there a tmux shortcut to go read only? [131076680020] |I've been using screen for years now as a way of ensuring that any remote work is safely kept open after disconnects/crashes. [131076680030] |In fact, as a matter of course, I use screens even when working locally. [131076680040] |Recently, my requirements have progressed to the stage that I switched to tmux because of the beauty of: [131076680050] |Attaching to my own sessions in readonly mode (-r) means that I don't have to worry about accidentally (the attach commands are sketched after the list below): [131076680060] |
  • pasting lines of garbage in IRC
  • [131076680070] |
  • halting an important compile/deploy process
  • [131076680080] |
  • typing a password in full view for passersby
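For reference, the two attach modes being toggled between look like this (a sketch; main is a placeholder session name):

    tmux attach -t main        # normal read-write attach
    tmux attach -t main -r     # read-only attach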
  • [131076680090] |Of course the issue is that I have to open a session, C-b + d to detach, and then reopen it with the -r flag to go readonly. [131076680100] |And then, when I occasionally want to chime in to an IRC conversation, interrupt a task or anything else, I have to detach again and reconnect normally. [131076680110] |Does anyone know of a way to make a key binding to switch between modes? [131076690010] |Not according to the man page, which only calls out the attach -r option to enable read-only mode. [131076690020] |Also, in the source code, only the following line in cmd-attach-session.c sets the read-only flag. [131076690030] |The rest of the code checks whether this flag is set, but does not change its value. [131076690040] |So again, it looks like you are out of luck unless you can make (or request) a code change: [131076700010] |Simple example of using Fedora alternatives to install old version of make [131076700020] |Can anyone help me using the Fedora alternatives system in order to install an old version of make? [131076700030] |I know the actual program is irrelevant, but I need it so I'll use it as my example. [131076700040] |I currently have make-3.82 installed on my Fedora 14 box, but I need to have 3.81 installed to build the Android kernel. [131076700050] |I already downloaded the 3.81 source and built it, but now I want to install it alongside 3.82 and be able to switch between them using Fedora alternatives. [131076700060] |Now that I've installed make-3.81 from source into /usr/local, how would I use alternatives to achieve my goal? [131076700070] |I know I must use the alternatives command, but so far my attempts have failed and I would like a concrete example.
  • You need to 'alternative-ize' the original make. [131076710020] |Change /usr/bin/make and /usr/bin/gmake to /usr/bin/make-3.82 and /usr/bin/gmake-3.82. [131076710030] |Realize that it'll get blown away the next time make is upgraded, because Fedora does not use alternatives for make.
  • [131076710040] |
  • Create an alternative for it. alternatives --install /usr/bin/make make /usr/bin/make-3.82 10 for make. [131076710050] |I used 10 as the priority to make it a lower priority than the next step.
  • [131076710060] |
  • Now create an alternative for your new make. alternatives --install /usr/bin/make make /usr/local/make-3.81/bin/make 20 (assuming you installed it in /usr/local/make-3.81). Notice that I used a higher priority, 20.
  • [131076710070] |
  • Use alternatives --display make to see what is being used. (A consolidated sketch of these steps follows this list.)
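Putting those steps together, the whole session might look like this (an untested sketch; adjust the paths to where you actually installed 3.81):

    mv /usr/bin/make /usr/bin/make-3.82
    alternatives --install /usr/bin/make make /usr/bin/make-3.82 10
    alternatives --install /usr/bin/make make /usr/local/make-3.81/bin/make 20
    alternatives --display make
    alternatives --config make    # switch between the two interactively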
  • [131076720010] |ImportError when running WSGI app in Apache: undefined symbol: PyUnicodeUCS2_DecodeUTF8 [131076720020] |I'm trying to run MoinMoin under Apache on SLES 11P1. [131076720030] |I'm getting the following error in my Apache log when somebody tries to access the site: [131076720040] |However, I have no problem if I execute the failing statement manually from a Python interpreter: [131076720050] |What could cause that error when running under Apache, but not when running Python at the command-line? [131076720060] |Edit: It looks like they are hitting different shared libraries. [131076720070] |Apache is hitting /usr/lib64/python2.6/lib-dynload/cPickle.so and the Python interpreter that I invoke at the command-line is hitting /usr/lib/python2.6/lib-dynload/cPickle.so [131076730010] |What documentation shows the associated numbers for Linux signals like SIGTERM and SIGKILL? [131076730020] |I can look around the internet and find that the associated number for SIGKILL is 9, but is there Linux documentation (such as a man page) showing the complete list of signals and their numbers? [131076740010] |man 7 signal [131076750010] |The signal(7) man page (at least the one I have) shows multiple possible numbers for some of the signals. [131076750020] |If you can get kill to list the numbers, they should be correct for the running system. [131076750030] |Try: [131076750040] |or [131076750050] |In both cases, that's an "ell" (for "list"), not a "one". [131076750060] |Bash's built-in kill -l shows a nice numbered table. [131076750070] |Linux's procps kill -l shows a terse list of names without numbers, but -L shows a table with the numbers. [131076750080] |BSD kill seems to only understand -l, and won't show you the numbers. [131076760010] |Also look in /usr/include/signal.h and either scan the #define or #include statements (which on Ubuntu 10.10 tells you to look in /usr/include/bits/signum.h). [131076760020] |Note that SIGRTMIN/SIGRTMAX are placeholders for kernel and unused signals and should not be used by the shell or a program. [131076770010] |Why do some regex commands have opposite interpretations of '\' with various characters? [131076770020] |Take, for example, this command: [131076770030] |This will find all the .h and .cpp files in your directory. [131076770040] |The period character '.' in regular expressions usually means "any character". [131076770050] |To get it to match only an actual period, you must escape it using the backslash character '\'. [131076770060] |In this case, given a character with a special meaning, you must escape it to get the actual character it represents. [131076770070] |Now, take the parentheses and the "or" bar: the characters '(', ')', and '|', respectively. [131076770080] |These also have special meanings, used for grouping regular expressions. [131076770090] |However, to get the special meaning, the characters must be escaped using the backslash! [131076770100] |Without the backslash, the characters have the meaning of the actual characters they represent. [131076770110] |Why is the '.' treated differently from '(', ')', and '|'? [131076780010] |The answer is really "just because". [131076780020] |There's a whole bunch of different regular expression syntaxes, and while they share a similar appearance and usually the basics are the same, they vary in the particulars. [131076780030] |Historically, every tool had its own new implementation, doing whatever the author thought best.
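A command of the kind the question describes, in GNU find's default emacs-style regex syntax (a reconstruction offered as a sketch, since the original command isn't shown):

    find . -regex '.*\.\(h\|cpp\)'
    # '.' matches any character unless escaped as '\.',
    # while '(' '|' ')' are literal unless escaped as '\(' '\|' '\)'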
[131076780040] |There's a balance between making characters special with and without escaping — too many characters that are "naturally special" and you end up having to escape them all the time just to match on them; or, the other way around, you end up having a bunch of escapes to use common regex syntax like () grouping. [131076780050] |And everyone writing a program decided how to do it based on the needs of what their program matched against, on what they felt was the right approach, and on the phase of the moon. [131076780060] |There's an attempt at standardization from POSIX, which defines "basic regular expressions" and "extended regular expressions". [131076780070] |Awesomely, these work backwards from each other in regards to \ sometimes, but not with perfect consistency. [131076780080] |Perl regular expressions have become another de facto standard, for two reasons: first, they're very flexible and powerful, and second, they're actually pretty sane, with conventions like "\ always escapes a non-alphanumeric character". [131076780090] |GNU Find has a -regextype option, where you can change the regular expression syntax used. [131076780100] |Sadly, "perl" is not an option, at least in the version of find I have. [131076780110] |(The default is, not surprisingly from GNU, "emacs", and that syntax is documented here.) [131076790010] |Why does Linux scale so well to different hardware platforms? [131076790020] |Why does Linux run well on so many different types of machines - desktops, laptops, servers, embedded devices, mobile phones, etc? [131076790030] |Is it mainly because the system is open, so any part of it can be modified to work in different environments? [131076790040] |Or are there other properties of the Linux kernel and/or system that make it easier for this OS to work on such a wide range of platforms? [131076800010] |I lack the detailed technical expertise to back up this answer, but my experience suggests that Linux scales well in comparison to other operating systems that I frequently use (primarily, Windows). [131076800020] |So perhaps the question is why Windows does not scale as well as Linux. [131076800030] |If restating the question that way is still useful to you, I would suggest that market forces motivate Microsoft to add features and functionality geared to the latest and most capable hardware, because they sell more copies of the operating system primarily when end users buy new systems. [131076800040] |So, at any point in time, I find that the latest release of Windows performs poorly on older, less capable hardware. [131076800050] |Forgive me if that oversimplifies your question. [131076810010] |While openness is certainly part of it, I think the key factor is Linus Torvalds' continued insistence that all of the work, from big to small, has a place in the mainline Linux kernel, as long as it's well done. [131076810020] |If he'd decided at some point to draw a line and say "okay, for that fancy super-computer hardware, we need a fork", then completely separate high-end and small-system variants might have developed. [131076810030] |As it is, instead people have done the harder work of making it all play together relatively well. [131076810040] |And kludges which enable one side of things to the detriment of the other aren't, generally, allowed in — again, forcing people to solve problems in a harder but more correct way, which usually turns out to be easier to go forward from once whatever required the kludge becomes a historical footnote.
[131076810050] |From an interview several years ago: [131076810060] |Q: Linux is a versatile system. [131076810070] |It supplies PCs, huge servers, mobiles and ten or so other kinds of devices. [131076810080] |From your privileged position, which sector will be the one where Linux will express the highest potential? [131076810090] |A: I think the real power of Linux is exactly that it is not about one niche. [131076810100] |Everybody gets to play along, and different people and different companies have totally different motivations and beliefs in what is important for them. [131076810110] |So I’m not even interested in any one particular sector. [131076820010] |Linux scales to many different types of hardware because:
  • it's very configurable
  • [131076820030] |
  • the source is freely available, and can be built for any CPU that has a C compiler available
  • [131076820040] |
  • processors in embedded devices and mobile phones have at least as much power as the original 386 machines that early Linux development was carried out on, and they continue to gain power
  • [131076820050] |
  • it works just fine with nothing but a network, some form of disk or disk-like device, and a serial port
  • [131076830010] |The Linux kernel scales well because that is what it is. [131076830020] |The core kernel is relatively small and does what it needs to do. [131076830030] |Device drivers are optional and can be left out of the kernel for smaller systems. [131076830040] |Beyond the minimal kernel requirements, most functionality is implemented as optional features. [131076830050] |Take a look at the config file installed next to most kernels. [131076830060] |It will list all the features which were turned on as well as which drivers are supported. [131076830070] |Drivers are supported either embedded in the kernel or as loadable modules. [131076830080] |This allows a kernel to be dynamically configured to the hardware it is running on. [131076830090] |This is the approach used by many distributions. [131076830100] |Other than having the compilation tools, modifying the kernel for new hardware is relatively simple. [131076830110] |For a new processor only the related code needs to be implemented as modifications to the existing functionality. [131076830120] |New devices only require a new driver with the relevant hooks. [131076830130] |New file systems are likewise relatively trivial modifications. [131076830140] |The code base has been well maintained to keep this flexibility without external forking. [131076830150] |Conditional compilation drives a very flexible kernel which has been kept as minimal as possible. [131076840010] |Does bash support forking similar to C's fork()? [131076840020] |I have a script that I would like to fork at one point so two copies of the same script are running. [131076840030] |For example, I would like the following bash script to exist: [131076840040] |If this bash script truly existed, the expected output would be: [131076840050] |or [131076840060] |Is there something that I can put in place of "do_fork()" to get this kind of output, or to cause the bash script to do a C-like fork? [131076850010] |Yes. [131076850020] |Forking is spelled &: [131076850030] |What may be confusing you is that $$ is not the PID of the shell process, it's the PID of the original shell process. [131076850040] |The point of making it this way is that $$ is a unique identifier for a particular instance of the shell script: it doesn't change during the script's execution, and it's different from $$ in any other concurrently running script. [131076850050] |One way to get the shell process's actual PID is sh -c 'echo $PPID'. [131076850060] |The control flow in the shell isn't the same as C. [131076850070] |If in C you'd write [131076850080] |then a shell equivalent is [131076850090] |The simple shell form first; parent & child corresponds to the usual C idiom [131076850100] |& and $$ exist and behave this way in every Bourne-style shell and in (t)csh. $PPID didn't exist in the original Bourne shell but is in POSIX (so it's in ash, bash, ksh, zsh, …). [131076860010] |There's no native bash (or, to my knowledge, any other typical *nix shell) way of doing this. [131076860020] |There are a lot of ways to spawn forked processes that do something else asynchronously, but I don't think there's anything that follows the exact semantics of the fork() system call. [131076860030] |The typical approach would be to have your top-level script spawn off helpers that do just the work you want split out. [131076860040] |If you do $0 $@ & or whatever, you'll start at the beginning again and need to figure that out somehow.
[131076860050] |I'm actually starting to think of several clever ways in which one might do just that.... [131076860060] |But, before my brain gets too carried away with that, I think a pretty good rule is: if you're trying to write something in shell and it's getting full of clever tricks and you're wishing for more language features, time to switch to a real programming language. [131076870010] |Yes, it's called subshells. [131076870020] |Shell code inside parentheses is run as a subshell (fork). [131076870030] |However the first shell normally waits for the child to complete. [131076870040] |You can make it asynchronous using the & terminator. [131076870050] |See it in action with something like this: [131076870060] |$ bash subsh.sh [131076880010] |vim search around line breaks [131076880020] |When I am editing a LaTeX document in vim, oftentimes I want to search for a phrase. [131076880030] |Because I have LaTeX set up to wrap lines, sometimes half of the phrase will be on one line while the other half is on the next. [131076880040] |In such a case, the search does not find what I'm looking for. [131076880050] |So for example [131076880060] |when searching using /hello world, I don't find what I'm looking for. [131076880070] |Is there a quick and easy way to modify this search to get me where I want to be? [131076880080] |Or better yet, a way to tell vim to match any white space (space, tab, new line) with a space in the search string? [131076890010] |The way I know of is not hard, but it's a little tedious. [131076890020] |Replace every space in your search query with the following (a worked example follows the list below): [131076890030] |(Note the space after the [.) [131076890040] |This is regular expression matching syntax. [131076890050] |Broken down, it means: [131076890060] |
  • [...] means match any one of the list of characters inside the brackets.
  • [131076890070] |
  • \t is Tab
  • [131076890080] |
  • \n is Newline
  • [131076890090] |
  • ...\+ means match one or more of the preceding.
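For instance, to find the phrase from the question even when it wraps, the query would look like this (a sketch; hello world stands in for your phrase):

    /hello[ \t\n]\+world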
  • [131076890100] |For more info on regular expressions, you can ask vim: [131076900010] |I personally would use [ \t\n]* instead of spaces. [131076900020] |This will match on zero or more of ' ', tab, and newline. [131076900030] |This way if one instance of your search pattern spans a line break, but another doesn't, both will be matched. [131076910010] |After more searching, it looks like the easiest way to do this is with \_s. [131076910020] |So for example: [131076920010] |EDID information [131076920020] |Hello, [131076920030] |I want to gather the EDID information of the monitor. [131076920040] |I can get it from the Xorg.0.log file when I run X with the -logverbose option. [131076920050] |But the problem is that if I switch the monitor (unplug the current monitor and then plug in another), there is no way to get this information. [131076920060] |Is there any way to get the EDID dynamically (at runtime)? [131076920070] |Or any utility/tool which will inform me as soon as a monitor is connected or disconnected? [131076920080] |I am using LFS-6.4. [131076920090] |Regards, Satish [131076930010] |There is a tool called read-edid that does exactly what its name suggests. [131076940010] |Try xrandr --verbose. [131076940020] |It shows the raw EDID information and lots of other useful information for all monitors connected to your computer. [131076940030] |Example output, with only the EDID section: [131076940040] |With regards to your last question, udev can inform you and let you run commands when a monitor is connected. [131076940050] |It's really easy to write bash scripts for udev events. [131076940060] |I'm not sure what you're trying to do here, but I find xrandr very useful for automatically setting the monitor layout that I want whenever I plug or unplug external monitors at work or at home. [131076940070] |You don't need the monitor serial for this. [131076940080] |The simplified output name works fine. [131076940090] |Run xrandr to see the outputs (monitors) available. [131076940100] |I run this script to set my preferred layout: [131076940110] |LVDS1 being the name of the notebook monitor, DPS2 the external one. [131076940120] |I hope this helps. [131076950010] |How to build a long command string? [131076950020] |Hi, [131076950030] |I have a sequence of commands to be used along with a lot of piping, something like this: [131076950040] |This basically filters the Apache log and prints three cols of info. [131076950050] |First col is IP address, second is time, third is a string. [131076950060] |The output could be sorted based on any column, so I need to use '-m' with sort for the time field. [131076950070] |Sorting order could also be reversed. [131076950080] |I want a string to store the arguments to sort, and let the combined strings get executed. [131076950090] |Something like this: [131076950100] |where [131076950110] |I am able to build such strings; when I echo one, it looks fine. [131076950120] |Problem is, how to execute it? [131076950130] |I get some errors like: [131076950140] |Please ignore it if 'sort' doesn't sort properly; that can be fixed. [131076950150] |I wish to know how I can build the final command string in steps. [131076950160] |The ways I tried to execute this command in the script are: [131076950170] |Edit: The script that I'm trying to use [131076960010] |When you run awk '{ ... }' at a shell prompt or from a shell script, the shell parses the quotes and passes the argument to awk without the quotes.
[131076960020] |What is happening is that you are somehow running it with the quotes still in the parameter. [131076960030] |Edit: with your update to the question, what you need is sh -c "$final_cmd". [131076970010] |The different parts can be put in shell functions: [131076970020] |then you could make things optional more easily: [131076970030] |When your command line is starting to get really long, writing it in a script and breaking it into parts (with functions) is usually really helpful. [131076980010] |7z command line add file to a flat directory 7z file [131076980020] |I would like to compress file "./data/x.txt" to path "./data/x.7z". [131076980030] |When running [131076980040] |The file "./data/x.txt" holds [131076980050] |as opposed to just (what I want) [131076980060] |However, I would like 7z to ignore the "./data" path inside of the x.7z file. [131076980070] |To clarify, I would like 7z to flatten the directory structure in the 7z file when adding x.txt. [131076980080] |Is this possible? [131076980090] |Update [131076980100] |Figured out an alternative that works for me. [131076980110] |I am utilizing subprocess to call 7z. [131076980120] |The cwd attribute changes the working directory for the subprocess command. [131076980130] |The code below solves my example above, where 'data' is the path that I would like to add a file from. [131076990010] |One possible solution is to chdir to some directory before compressing. [131076990020] |For example: [131076990030] |Yet another way is to use another archiver, e.g. rar. [131076990040] |It has a lot of useful command line switches. [131076990050] |Your problem can be solved with the -ep/-ep1 options: [131076990060] |or [131076990070] |The relevant piece of rar's help: [131077000010] |What dr01 answered is generally correct, but why use 7z for compressing a single file at all? [131077000020] |I'd suggest you take a look at xz or maybe even pxz, if that's available on your distro. xz works well with tar; newer versions of tar have the "-J" switch, which runs it through xz [131077000030] |In any event, you can use xz to compress a single file, just as you'd use gzip or bzip2: [131077000040] |(creates file file.txt.xz) [131077010010] |Learning about iptables: Is this Slicehost example any good? [131077010020] |In preparation to eventually launch my first website, I've been playing with Ubuntu Lucid Server in a VM on my WinXP machine. [131077010030] |I've been alternating between the Linode and Slicehost tutorials/articles for guidance, and I'm trying to make sense of the section on iptables. [131077010040] |Slicehost's example can be found here. [131077010050] |It seems deceptively simple to me, though. [131077010060] |It's just a matter of locking down everything and punching very specific holes. [131077010070] |When all other articles seem confusing and make iptables sound like a huge PITA, this example seems very straightforward (or at least, it seems that way once you know what the commands mean). [131077010080] |Is this example suitable for a production server? [131077010090] |Do the really complicated bits of iptables only really come up later? [131077020010] |Depends vastly on your needs. iptables can be quite easy to master, when you understand how it works. [131077020020] |There are three chains in the filter table that contain rules: INPUT, OUTPUT and FORWARD. [131077020030] |If you're wishing to block only packets coming into your server, then the INPUT chain is all you really need to be concerned about (a sketch follows).
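To make that concrete, a minimal lock-down-and-punch-holes INPUT chain might look like this (an illustrative sketch, not the Slicehost ruleset; the ports are examples only):

    # Keep already-established connections working
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Punch specific holes (example services)
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # ssh
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT   # http
    # Catch-all: drop everything else
    iptables -A INPUT -j DROP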
[131077020040] |After that, it's just setting the appropriate criteria for what you want to block or how you want to handle connections. [131077020050] |Just remember that when you're reading the rules, it's based on first-match, which means that if a packet matches a rule before the one you actually want, it will obey that first rule. [131077020060] |So, order is important. [131077020070] |Generally, for basic INPUT filters, you'll find only a few holes punched for the services that are important, then a global catch-all that blocks everything else. [131077020080] |The example Slicehost gives is a good example of this. [131077030010] |error booting the custom compiled kernel 2.6.37 on Ubuntu 10.04: gave up waiting on root device [131077030020] |Possible Duplicate: Kernel can't find /dev/sda file during boot [131077030030] |Hi, I know this is a very common problem and there are many threads discussing it. [131077030040] |However, after trying many solutions my problem still persists :( [131077030050] |I have installed Ubuntu Lucid, which works fine. Then I downloaded kernel 2.6.37 from the git tree and compiled it using the normal compilation process. I created an initrd image and called [131077030060] |update-grub [131077030070] |It detects my initrd image. [131077030080] |However, I get the following error while booting: [131077030090] |Gave up waiting for root device. [131077030100] |Common problems: -Boot args (cat /proc/cmdline) -Check rootdelay= (did the system wait long enough?) -Check root= (did the system wait for the right device?) -Missing modules (cat /proc/modules; ls /dev) ALERT! root=UUID=/... does not exist [131077030110] |And then control falls to the initramfs prompt. [131077030120] |I tried the following solutions: [131077030130] |
  • Write GRUB_DISABLE_LINUX_UUID=true in /etc/default/grub
  • [131077030140] |
  • compile kernel using CONFIG_DEVTMPFS=y
  • [131077030150] |Still I am unable to boot the compiled kernel. [131077030160] |Could someone please suggest a solution. [131077030170] |Thank you :) [131077040010] |Installing multiple packages with one yum command [131077040020] |Is there a way to install 2 or more packages using one yum command [131077050010] |Check this out. [131077060010] |Yes, I do it all the time. [131077060020] |Any yum command will work with multiple packages specified; just take a look at the man page. [131077070010] |Download and install latest deb package from GitHub via terminal [131077070020] |Hi, [131077070030] |I would like to download and install the latest .deb package from GitHub (https://github.com/elbersb/otr-verwaltung/downloads to be exact). [131077070040] |How can I download the latest package (e.g. otrverwaltung_0.9.1_all.deb) automatically with a script from GitHub? [131077070050] |What I have tried so far: [131077090010] |Connections to my server from non-local users are too slow [131077090020] |Hi, I have a CentOS server based in Chicago which doesn't seem to be providing fast bandwidth connections to non-local users. [131077090030] |Is there anywhere on the server where this limitation may be configured? [131077090040] |(I am relatively new to Linux so please forgive my ignorance on the matter) [131077100010] |One could configure traffic shaping at the kernel level, or throttle things in various other ways from within the system. [131077100020] |But nothing like that is on by default, and I think it's unlikely that someone would have configured it. [131077100030] |Can you elaborate on "doesn't seem to be providing"? [131077100040] |How have you tested? [131077100050] |Is this at an application level (for example, a web server)? [131077100060] |Are initial connections slow but subsequent use fine, or is the whole thing slow? [131077100070] |How are your local users connected and how have you tested that? [131077110010] |How to check accurately the remaining disk space on a partition? [131077110020] |I have my /home partition formatted as ext3. [131077110030] |Occasionally, some program that is part of GNOME gives notifications about there only being 700MB of space left. [131077110040] |Nautilus tells me I have 5.6GB. [131077110050] |Disk Usage Analyzer tells me I have 10GB. [131077110060] |Which of these is most accurate, or is there another program that is more accurate? [131077110070] |What accounts for these different figures? [131077120010] |Try a different program; maybe this will be more accurate: [131077130010] |Disk Usage Analyzer counts up the amount of space in use by all the files. [131077130020] |Something like df asks the filesystem how much space is in use. [131077130030] |These two amounts can be very different depending on e.g. how many deleted, but still open, files are on the filesystem. [131077130040] |e.g. if you do something like this: [131077130050] |You will see that the 1GB of space is still shown by df, but not by Disk Usage Analyzer (a sketch of this effect follows below). [131077130060] |This is because while the file is still open, it is not actually removed from disk. [131077130070] |When the python script finishes, the filesystem will free up the space. [131077130080] |I'm not sure if the above explains your 4.4G to 9G discrepancy, though! [131077140010] |If you have a large number of deleted files in your Trash, that could account for the difference you see between Nautilus and df -h.
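A quick shell session reproducing the deleted-but-open effect described above (a sketch; the path is a placeholder, and /tmp must be a real on-disk filesystem, not tmpfs, for the difference to show):

    exec 3>/tmp/big.tmp                      # open fd 3 on a new file
    dd if=/dev/zero bs=1M count=1024 >&3     # write 1GB through it
    rm /tmp/big.tmp                          # delete it while fd 3 is still open
    df -h /tmp                               # the space still shows as used
    du -sh /tmp                              # ...but no file accounts for it
    exec 3>&-                                # close the fd; the space is freed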
[131077150010] |How does bash interpret the equal operator with no surrounding spaces in a conditional? [131077150020] |The following script does not behave as I would have expected. [131077150030] |Adding spaces around the '=' in the conditional made it perform how I wanted, but it got me thinking: what is it actually doing inside the conditional? [131077150040] |The output is: [131077150050] |The contents of S1 and S2 don't change from what they are assigned, so the = doesn't perform an assignment. [131077160010] |The equals operator does nothing in this case. [131077160020] |The expression $S1=$S2 evaluates to an actual string, with the values of S1 and S2 in place, effectively the string literal "foo=bar". [131077160030] |Since this string literal is not null, the statement [131077160040] |evaluates to true, and the body of the if statement is executed. [131077170010] |It's helpful to remember that [ is actually a command, usually also available as test. [131077170020] |In bash, it's a builtin, so you can see the documentation with man builtin. [131077170030] |In that documentation: [131077170040] |The two-argument rules are various tests, and the three-argument ones are generally comparisons. [131077170050] |When you put a space around the =, you get three arguments. [131077170060] |But when you put it all together, you get one argument, and as you can see, if that argument isn't null, it returns true. [131077180010] |Linux: Writing a watchdog to monitor multiple processes [131077180020] |A few years ago, a coworker came up with an elegant solution for a watchdog program. [131077180030] |The program ran on Windows and used Windows Event objects to monitor the process handles (PIDs) of several applications. [131077180040] |If any one of the processes terminated unexpectedly, its process handle would no longer exist and his watchdog would immediately be signaled. [131077180050] |The watchdog would then take an appropriate action to “heal” the system. [131077180060] |My question is, how would you implement such a watchdog on Linux? [131077180070] |Is there a way for a single program to monitor the PIDs of many others? [131077190010] |The traditional, portable, commonly-used way is that the parent process watches over its children. [131077190020] |The basic primitives are the wait and waitpid system calls. [131077190030] |When a child process dies, the parent process receives a SIGCHLD signal, telling it it should call wait to know which child died and its exit status. [131077190040] |The parent process can instead choose to ignore SIGCHLD and call waitpid(-1, &status, WNOHANG) at its convenience. [131077190050] |To monitor many processes, you would either spawn them all from the same parent, or invoke them all through a simple monitoring process that just calls the desired program, waits for it to terminate and reports on the termination (in shell syntax: myprogram; echo myprogram $? >>/var/run/monitor-collector-pipe). [131077190060] |If you're coming from the Windows world, note that having small programs doing one specialized task is a common design in the Unix world; the OS is designed to make processes cheap. [131077190070] |There are many process monitoring (also called supervisor) programs that can report when a process dies and optionally restart it and far more besides: Monit, Supervise, Upstart, … [131077200010] |My approach to this issue is to use init and its built-in respawn directive to start/restart whatever you need to run.
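With sysvinit, that takes one line in /etc/inittab (a sketch; the id field and the daemon path are placeholders):

    # /etc/inittab: restart the process whenever it dies, in runlevels 2-5
    wd:2345:respawn:/usr/local/bin/mydaemon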
[131077200020] |This was its original intent and main purpose. [131077200030] |In some cases you will need to run a script to clean up after a process has died, or to prepare for the process to start (most of the time the work is the same). [131077200040] |In most cases a bash script that ends in exec works great for this. [131077210010] |Reroute one URL [131077210020] |Hello, [131077210030] |I am looking to reroute one particular URL on a third-party server to another, for example http://website.com/page1.html to http://website.com/page2.html, but only on my machine. [131077210040] |But I still want it to reply as if it were page1.html. [131077210050] |Is there a way to do that on a UNIX client? [131077210060] |Alex. PS: If any clarification is required, please tell me. [131077220010] |I posted another comment yesterday, but it is not here now! [131077220020] |Anyway, it seems that a proxy is probably the way to go, despite your reservations. [131077220030] |A proxy can run on your machine, and therefore needn't be external. [131077220040] |Changing /etc/hosts to fool your browser into connecting to site2 instead of site1 just affects name resolution and is easy. [131077220050] |Getting your machine to fetch page2 instead of page1 is much harder. [131077220060] |You could probably do it this way if you do not want to configure the clients to use a proxy: [131077220070] |
  • Add website.com to /etc/hosts pointing at 127.0.0.1 (as sketched after this list)
  • [131077220080] |
  • Set up a reverse proxy on your machine and configure it to point to the real website.com.
  • [131077220090] |
  • Configure the proxy to fetch page2 when page1 is requested.
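The first step is just a hosts-file line (a sketch; website.com is the example domain from the question):

    # /etc/hosts -- send website.com to the local reverse proxy
    127.0.0.1   website.com

The reverse proxy itself would then need to forward to the real site's IP address rather than its name, since that name now resolves locally.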
  • [131077230010] |Copy a File From One Zip to Another? [131077230020] |I have a file named 'sourceZip.zip' [131077230030] |This file ('sourceZip.zip') contains two files: [131077230040] |'textFile.txt' [131077230050] |'binFile.bin' [131077230060] |I also have a file named 'targetZip.zip' [131077230070] |This file ('targetZip.zip') contains one file: [131077230080] |'jpgFile.jpg' [131077230090] |In Linux, what bash command shall I use to copy both files ('textFile.txt', 'binFile.bin') from the source archive ('sourceZip.zip') straight into the second archive ('targetZip.zip'), so that at the end of the process, the second archive ('targetZip.zip') will include all three files? [131077230100] |(ideally, this would be done in one command, using 'zip' or 'unzip') [131077240010] |Not sure about adding to an archive, but you can always recreate one [131077240020] |The first two lines unpack the archives into a tmp dir and the third one packages them back up. [131077250010] |I'm going to say straightaway that I don't know the answer, from a practical standpoint. [131077250020] |However, from a theoretical viewpoint, surely you'd have to unzip and re-zip the files. [131077250030] |You can't just move them in a zipped state, surely? [131077250040] |For a start, you'd have to assume that both zip files had identical settings with regard to compression mode and level. [131077250050] |You could possibly chain the unzip and zip commands together, but then that becomes a question of juggling around the commands on the command line, for which I shall defer to the *nix experts. [131077250060] |I stand very ready to be corrected. [131077260010] |Using the usual command-line zip tool, I don't think you can avoid separate extraction and update commands. [131077260020] |There are other languages with more convenient zip file manipulation libraries. [131077260030] |For example, Perl with Archive::Zip. [131077260040] |Error checking omitted. [131077260050] |Another way is to mount the zip files as directories. [131077260060] |Mounting either of the zip files is enough; you can use zip or unzip on the other side. [131077260070] |Avfs provides read-only support for many archive formats. [131077260080] |Fuse-zip provides read-write access to zip archives, so you can copy the files with cp. [131077260090] |Warning: I typed these scripts directly in my browser. [131077260100] |Use at your own risk. [131077270010] |What is the real difference between "apt-get" and "aptitude"? (How about "wajig"?) [131077270020] |I know that both apt-get and aptitude are command line package management interfaces on Debian-derived Linux, with different options, but I'm still somewhat confused. [131077270030] |Under the hood, aren't they using the same APT system? [131077270040] |Why does Debian maintain these parallel tools? [131077270050] |(Bonus question: what on earth is wajig?) [131077280010] |I've often wondered myself. [131077280020] |Wikipedia suggests that aptitude properly only refers to the ncurses-based interface, which itself uses apt-get in the background. [131077280030] |The fact that you can use most apt-get command arguments with aptitude itself is just a design decision to make it easier for apt-get users to move to aptitude and vice-versa. [131077280040] |I've never used wajig, but the documentation suggests that it's just a script which knows whether you're passing it a deb file (when it runs dpkg) or an apt package name (when it runs apt-get instead). [131077280050] |Could you try it out and see if that is what it does?
[131077280060] |Of course, the real difference is: [131077290010] |aptitude remembers which packages were explicitly requested and which were only installed due to dependencies. [131077290020] |It will automatically uninstall packages which were not explicitly requested when they are no longer needed. [131077290030] |apt-get treats packages requested explicitly and their dependencies the same. [131077290040] |So better to use aptitude; this helps to keep your system clean. [131077300010] |http://pthree.org/2007/08/12/aptitude-vs-apt-get/ [131077310010] |As mentioned by http://pthree.org/2007/08/12/aptitude-vs-apt-get/, aptitude has a much easier-to-use command-line interface. [131077310020] |Under the hood, aren't they using the same APT system? [131077310030] |Yes. [131077310040] |The underlying system is not just apt, but dpkg. [131077310050] |This system is just as dumb as RPM: it can only handle the installation and administration of single packages. [131077310060] |It tracks which installed files belong to which package. [131077310070] |apt handles the downloads of repositories, tracking of dependencies, and so on for all individual packages - which it then installs using dpkg. aptitude does the same, with a different interface. [131077320010] |The most obvious difference is that aptitude provides a terminal menu interface (much like Synaptic in a terminal), whereas apt-get does not. [131077320020] |Considering only the command-line interfaces of each, they are quite similar, and for the most part, it really doesn't matter which you use. [131077320030] |Recent versions of both will track which packages were manually installed, and which were installed as dependencies (and therefore eligible for automatic removal). [131077320040] |In fact, I believe that even more recently, the two tools were updated to actually share the same database of manually vs automatically installed packages, so cases where you install something with apt-get and then aptitude wants to uninstall it are mostly a thing of the past. [131077320050] |There are a few minor differences: [131077320060] |
  • aptitude will automatically remove eligible packages, whereas apt-get requires a separate command to do so
  • [131077320070] |
  • The commands for upgrade vs. dist-upgrade have been renamed in aptitude to the probably more accurate names safe-upgrade and full-upgrade, respectively (a few of these commands are sketched after this list).
  • [131077320080] |
  • aptitude actually performs the functions of not just apt-get, but also some of its companion tools, such as apt-cache and apt-mark.
  • [131077320090] |
  • aptitude has a slightly different query syntax for searching (compared to apt-cache)
  • [131077320100] |
  • aptitude has the why and why-not commands to tell you which manually installed packages are preventing an action that you might want to take.
  • [131077320110] |
  • If the actions (installing, removing, updating packages) that you want to take cause conflicts, aptitude can suggest several potential resolutions. apt-get will just say "I'm sorry Dave, I can't allow you to do that."
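For reference, a few of the aptitude counterparts mentioned above (a sketch; libfoo is a placeholder package name):

    aptitude safe-upgrade        # the renamed apt-get upgrade
    aptitude full-upgrade        # the renamed apt-get dist-upgrade
    aptitude why libfoo          # what is keeping libfoo installed
    aptitude search '~i'         # its own query syntax: list installed packages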
  • [131077320120] |There are other small differences, but those are the most important ones that I can think of. [131077320130] |In short, aptitude more properly belongs in the category with Synaptic and other higher-level package manager frontends. [131077320140] |It just happens to also have a command-line interface that resembles apt-get. [131077320150] |

    Bonus Round: What is wajig?

    [131077320160] |Remember how I mentioned those "companion" tools like apt-cache and apt-mark? [131077320170] |Well, there's a bunch of them, and if you use them a lot, you might not remember which ones provide which commands. wajig is one solution to that problem. [131077320180] |It is essentially a dispatcher, a wrapper around all of those tools. [131077320190] |It also applies sudo when necessary. [131077320200] |When you say wajig install foo, wajig says "Ok, install is provided by apt-get and requires admin privileges," and it runs sudo apt-get install foo. [131077320210] |When you say wajig search foo, wajig says "Ok, search is provided by apt-cache and does not require admin privileges," and it runs apt-cache search foo. [131077320220] |If you use wajig instead of apt-get, apt-mark, apt-cache and others, then you'll never have this problem: [131077320230] |If you want to know what wajig is doing behind the scenes, which tools it is using to implement a particular command, it has --simulate and --teaching modes that show you. [131077320240] |Two wajig commands that I use often are wajig list-files foo and wajig whichpkg /usr/bin/foo. [131077320250] |Edit: Of course, don't forget wajig moo. [131077330010] |apt-get, as well as the various companion tools, use significantly less memory than respective command-line invocations of aptitude, and are a bit quicker. [131077330020] |I was blissfully unaware of this until I tried upgrading the Debian install on a wizened old Pentium ThinkPad with 32MB of RAM. [131077330030] |It would take an hour or two of swap-thrashing to run apt-get; aptitude would fail after, I think, a longer period of time. [131077330040] |This distinction is more or less irrelevant on anything resembling a modern desktop system. [131077340010] |Remapping caps-lock to escape, and menu to compose, on the Linux console [131077340020] |When running X I use a .xmodmaprc to remap certain keys thusly: [131077340030] |How can I accomplish the same things on the console? [131077340040] |

    update

    [131077340050] |In addition to the partial solution given in my answer, I've learned that the console maps CTRL-. to Compose. [131077340060] |So I may be able to get used to that. [131077340070] |Setting up the Menu key as Compose is not so easily done, as there are a ton of nul-assigned keycodes and no obvious contender for an alternate name for Menu. [131077340080] |I've also realized that the compose bindings themselves are much more limited than what I'm used to under X, and that most of the special characters I use frequently are not there. [131077340090] |Perhaps there is a utility that will translate X-syntax compose bindings into something that loadkeys can read? [131077350010] |You'll have to edit your console keymap. [131077350020] |On my console, I have mapped Escape to Caps Lock and Caps Lock to Escape. [131077350030] |Here's how it works. [131077350040] |
  • First you need to find your keymap. [131077350050] |I use the standard US layout. [131077350060] |On my system, it is located under /usr/share/keymaps/i386/qwerty/us.map.gz.
  • [131077350070] |
  • Make a copy of the file under a new name, for example us-nocaps.map.gz.
  • [131077350080] |
  • Unpack the file and open it in a text editor: gunzip us-nocaps.map.gz && vim us-nocaps.map
  • [131077350090] |
  • Change the mappings in the file to your liking, for example I had keycode 58 = Caps_Lock which I changed to keycode 58 = Escape.
  • [131077350100] |
  • Gzip the file and load it: gzip us-nocaps.map && loadkeys us-nocaps. (A consolidated sketch of these steps follows this list.)
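Put together, the whole procedure might look like this (a sketch; the paths match a typical layout, and the sed one-liner stands in for the manual edit described above):

    cd /usr/share/keymaps/i386/qwerty
    cp us.map.gz us-nocaps.map.gz
    gunzip us-nocaps.map.gz
    sed -i 's/Caps_Lock/Escape/' us-nocaps.map   # or edit by hand
    gzip us-nocaps.map
    loadkeys us-nocaps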
  • [131077350110] |One more thing: You'll probably have to configure your distribution somehow to make sure the new keymap always gets loaded on boot, otherwise you'll have to load your keymap manually with loadkeys all the time. [131077350120] |How you do that depends on what distribution you're using. [131077360010] |With some help from Cedric, I've discovered that Ubuntu hasn't installed the keymaps where $ man loadkeys says they are supposed to be (/usr/share/keymaps). [131077360020] |This is a brand-new install, so maybe I just need to install a package in order to get keymaps other than the one I selected during installation. [131077360030] |In lieu of pursuing that angle, and in an effort to make the solution a bit more robust and convenient, I'm doing this: [131077360040] |Although I'm a bit confused as to why Cedric's system uses Caps_Lock for caps lock and mine uses CtrlL_Lock, it is working regardless. [131077370010] |`man ascii` is misaligned when using vim as a pager [131077370020] |To use Vim as a pager for man I have [131077370030] |in my profile. [131077370040] |Now man pages look very good with colors and stuff. [131077370050] |However, when trying man ascii as a way of looking at the ASCII table, I notice that the table is misaligned as in the screenshot below: [131077370060] |This problem doesn't happen with the default pager. [131077370070] |When I set export MANPAGER="view -" the table is also correct, so something must be wrong with the manpager.sh script: [131077370080] |How can I fix this? [131077380010] |When I try with the following script things are normal: [131077380020] |I'm not sure what role col plays in the sequence, but it is certainly messing up the spaces. [131077380030] |Until somebody gives a better solution, this will be my fix. [131077380040] |Edit: so col was the problem, because it "replaces white-space characters with tabs where possible". [131077380050] |To fix this, tell col to use spaces instead of tabs with the -x option. [131077380060] |The final config is as follows (with credit to Gilles). [131077390010] |Linux installation on an extended partition [131077390020] |I just want to install another version of the same Linux distro into the extended partition, i.e. into /dev/sda7. [131077390030] |Will it cause any problems for the current Linux installation and its data and contents? [131077390040] |If not, I can dual-boot into the two distros after installation, right? [131077390050] |Also, how can I efficiently mount the / and /home for the new installation? [131077390060] |All suggestions are welcome. [131077400010] |Having multiple Linux installations on the same disk is not a problem. [131077400020] |The installer should get everything right, though this depends on your distribution (which you don't specify). [131077400030] |With “automated” distributions such as Ubuntu, you may just need to answer one or two questions; with “hands-on” distributions such as Arch, you may need to configure a couple of things manually. [131077400040] |There's no risk of losing any data as long as you're careful not to tell the installer to overwrite your existing installation (double-check all partition numbers). [131077400050] |I recommend deleting /dev/sda7 now, that way you can just tell the installer to install in the free space. [131077400060] |Only one of the distributions will manage the bootloader. [131077400070] |It can be either the old one or the new one.
[131077390010] |Linux installation on an extended partition [131077390020] |I just want to install another version of the same Linux distro into the extended partition, i.e. onto /dev/sda7. [131077390030] |Will this cause any problems for the current Linux installation and its data? [131077390040] |If not, I can dual-boot the two distros after installation, right? [131077390050] |Also, how can I efficiently mount / and /home for the new installation? [131077390060] |All suggestions are welcome. [131077400010] |Having multiple Linux installations on the same disk is not a problem. [131077400020] |The installer should get everything right, though this depends on your distribution (which you don't specify). [131077400030] |With “automated” distributions such as Ubuntu, you may just need to answer one or two questions; with “hands-on” distributions such as Arch, you may need to configure a couple of things manually. [131077400040] |There's no risk of losing any data as long as you're careful not to tell the installer to overwrite your existing installation (double-check all partition numbers). [131077400050] |I recommend deleting /dev/sda7 now; that way you can just tell the installer to install in the free space. [131077400060] |Only one of the distributions will manage the bootloader. [131077400070] |It can be either the old one or the new one. [131077400080] |Older BIOSes require the bootloader to be near the beginning of the drive; I don't know the exact timeline, but if your 500GB drive was sold with your computer, it should be recent enough that this is not a concern. [131077400090] |You can share swap space between installations. [131077400100] |This should happen automatically if you tell the installer to use /dev/sda6 as swap space. [131077400110] |You can share home partitions between installations. [131077400120] |Here I'm less confident that the installer can do the right thing. [131077400130] |Make sure it doesn't reformat /dev/sda5 (if it does, it will ask for confirmation before). [131077400140] |If you can't get the installer to do what you want, add an entry for /home manually to /etc/fstab on the new installation. [131077400150] |Copy the entry from the existing installation; it should look like the first entry sketched below. [131077400160] |If the installer doesn't add mount points for your existing system, add them yourself to /etc/fstab. [131077400170] |You'll probably want to do that on the existing installation anyway. [131077400180] |An entry in /etc/fstab looks like the ones sketched below. [131077400190] |Replace defaults by ro if you want to mount read-only, or by noauto if you don't want the filesystem to be mounted at boot time but want to be able to mount it with the command mount /media/linux2. [131077400200] |If you want both options, it's a comma-separated list: noauto,ro. [131077400210] |If your installer doesn't add fstab entries to the new installation, they should be something like the last two entries sketched below. [131077400220] |The two entries should be in this order. [131077400230] |You'll need to create the directories /media/linux1 and /media/linux2. [131077400240] |You only need to create /media/linux1/home if you want to be able to mount /dev/sda5 even when /dev/sda3 is not mounted.
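As an illustrative sketch (the device names follow the partition layout discussed above; the filesystem type and options are assumptions to adapt to your system):

    # shared home partition, mounted at boot:
    /dev/sda5   /home                ext3   defaults    0  2

    # mount points for reaching the other installation, in this order:
    /dev/sda3   /media/linux1        ext3   noauto,ro   0  0
    /dev/sda5   /media/linux1/home   ext3   noauto,ro   0  0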
[131077410010] |What is the difference between "extended" partition and "logical" partition [131077410020] |What is the difference between "extended" partitions and "logical" partitions on my hard disk? [131077410030] |What's the need for each? [131077410040] |I am using Linux. [131077420010] |Historically, hard drives have only been able to contain at most four partitions because of the originally defined format of the partition table. [131077420020] |This is not specific to operating systems. [131077420030] |You simply can't create more than four primary partitions.* [131077420040] |However, in order to circumvent this limit and still remain compatible with older systems, you can create an extended partition. [131077420050] |An extended partition can contain multiple logical partitions within it. [131077420060] |This allows you to create more than four partitions in total, without having to change the format of the partition table. [131077420070] |If you're interested in the details, you can look at the Wikipedia entries on disk partitioning or the master boot record. [131077420080] |* At least on computers with BIOS firmware; I don't know if the UEFI standard (a BIOS replacement) lifts this limitation. [131077430010] |CCD cam to USB - AV grabber for linux [131077430020] |Hi, I was wondering if there is an AV grabber that supports Linux. [131077430030] |I have an IC-348 model I.C.U (international camera unit) camera, made in the P.R.C., that uses a Sony CCD 420TVL. [131077430040] |I want to grab the video output and use it in a program that I am writing. [131077430050] |The thing is that I couldn't find good CCD-cam-to-USB hardware that supports Linux. [131077430060] |Is there any? [131077430070] |Can you please give me a model name? [131077430080] |Or can you tell me another way to get a CCD cam image into a Linux laptop? [131077430090] |(And can the moderator add the following tags "ccd cam av grabber" since my whole question is about these.) [131077440010] |How can I log samba events? [131077440020] |How can I log samba events? [131077440030] |I have samba-shared directories and I want to know what exactly someone has downloaded from them. [131077440040] |Actually, I can watch iftop to figure out WHO has downloaded, but not WHAT. [131077440050] |Any ideas? [131077450010] |According to Chapter 9 of Using Samba - Troubleshooting Samba: [131077450020] |To turn logging on and off, set the appropriate level in the [global] section of smb.conf. [131077450030] |and [131077450040] |By default, logs are placed in samba_directory/var/smbd.log and samba_directory/var/nmbd.log, where samba_directory is the location where Samba was installed (typically, /usr/local/samba). [131077450050] |You can increase the logging level to show more detailed information. [131077450060] |So just keep increasing the level until you have information that is detailed enough for your needs.
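For example, a more verbose [global] logging setup might look like this (the level and paths are illustrative, not values from the book; reload Samba after editing):

    [global]
        log level = 2                      # level 2 records file opens and closes
        log file = /var/log/samba/log.%m   # one log file per connecting machine
        max log size = 1000                # rotate logs at roughly 1 MB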
[131077460010] |can ping, can't connect by IP - resetting the network fixes it briefly, then it breaks again [131077460020] |I've got one server (CentOS 5), out of a half dozen, that the network flakes out on. [131077460030] |When it's 'dead', I can still ping other servers by IP, but if I try to make any kind of connection also using an IP (ssh, telnet on 53 to test DNS) it just hangs and does nothing. [131077460040] |Running /etc/init.d/network restart makes it work again -- for a short amount of time. [131077460050] |I've compared the config against the other servers a dozen times and don't see anything different aside from the server IP. [131077460060] |Prior to today, this network card & cable have been running flawlessly. [131077460070] |I changed the way the network is laid out this morning, and this one server is giving me grief while the others all fell right into place. [131077460080] |The pings are 100% clean, and there are no errors in ifconfig. [131077460090] |I don't see any errors in any of the main logs, and I'm not really sure how else to try to diagnose the problem. [131077460100] |If anyone has an insight, or guidance on how to troubleshoot further, it would be greatly appreciated. [131077470010] |Assuming this is a physical host and not a virtual machine... [131077470020] |Check your switch port packet counts. [131077470030] |When a packet is sent to the host, the counter should increment. [131077470040] |At the same time, use tcpdump on the host to see if it's receiving the packet at all. [131077470050] |Look for IP conflicts and ARP errors. [131077470060] |Try moving the host to another switch port. [131077470070] |Try using a different physical interface on the host. [131077470080] |Try a different cable. [131077470090] |Check for link negotiation errors (especially immediately when the failure occurs). [131077480010] |date - years prior to 1901 are treated as invalid [131077480020] |I have date (GNU coreutils) 7.1 installed on my system. [131077480030] |If I try to check dates prior to 14-Dec-1901, I get an "invalid date" error. [131077480040] |For example, [131077480050] |What should I do to make the date utility treat years prior to 1901 as valid? [131077480060] |I receive similar errors for dates after 19-Jan-2038. [131077490010] |Good question. [131077490020] |The documentation says it should be allowed. [131077490030] |For numeric months, the ISO 8601 format `YEAR-MONTH-DAY' is allowed, where YEAR is any positive number, ... [131077490040] |A leading zero must be present if a number is less than ten. [131077490050] |If YEAR is 68 or smaller, then 2000 is added to it; otherwise, if YEAR is less than 100, then 1900 is added to it. [131077490060] |Are you on a 32-bit system? [131077490070] |Do you get an error with dates after 2038-01-20 as well, e.g. [131077490080] |If so, it sounds like GNU date is using a 32-bit time value. [131077490090] |I'm not sure how you can fix this other than using a 64-bit system or using a different tool, for example DateTime in Perl or datetime in Python. [131077490100] |Some background: [131077490110] |Unix times count the number of seconds from January 1, 1970 using an integer value. [131077490120] |If the system uses 32-bit integers, it can only count 2.1 billion seconds forward (up to 2038-01-19 03:14:07 UTC) and 2.1 billion seconds backwards (back to 1901-12-13 20:45:52 UTC). [131077490130] |More information at: [131077490140] |
  • Unix time
  • [131077490150] |
  • Year 2038 problem
  • [131077490160] |
  • 1901 (Significance of 1901 for Modern Computers)
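To see the boundary concretely, here is a quick check you can run (a sketch; whether the out-of-range calls actually fail depends on your date build using 32-bit time values):

    # the two edges of a signed 32-bit time_t:
    date -u -d '1901-12-13 20:45:52'   # earliest representable instant
    date -u -d '2038-01-19 03:14:07'   # latest representable instant

    # one second past either edge produces "invalid date" on an affected system:
    date -u -d '1901-12-13 20:45:51'
    date -u -d '2038-01-19 03:14:08'

    # a workaround using another tool, e.g. Python's datetime:
    python -c 'import datetime; print(datetime.date(1776, 7, 4))'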
[131077500010] |Your system (or at least that version of date) is probably using a 32-bit internal time value. [131077500020] |The Unix epoch (zero time value) is 1970-01-01 00:00:00 UTC. [131077500030] |This starting point puts 1901-12-13 00:00 EST just outside the range of a signed 32-bit time value. [131077500040] |1901-12-14 00:00:00 EST is -2147454000
1901-12-13 15:45:52 EST is -2147483648 (aka INT_MIN in C, the minimum 32-bit signed integer)
1901-12-13 00:00:00 EST is -2147540400 [131077500050] |You could try using 1901-12-13 15:45:52 EST. [131077500060] |It should work, but one second earlier will probably fail in the same way as 1901-12-13 00:00. [131077510010] |How do I completely disable an account? [131077510020] |How do I completely disable an account? passwd -l will not allow anyone to log into an account using a password, but you can still log in via private/public keys. [131077510030] |How would I disable the account completely? [131077510040] |As a quick fix, I renamed the authorized_keys file to authorized_keys_lockme. [131077510050] |Is there another way? [131077520010] |To delete it entirely, use userdel. [131077520020] |Please note that if you delete an account, there is a risk that its user ID is still used in the file system somewhere, and a new user would inherit ownership of those files if it came in under that same user ID. [131077520030] |You would want to change the owner of any files that are owned by the deleted user. [131077520040] |If you would like to add the user back later, save its lines from /etc/passwd (and on Solaris /etc/shadow) to temporary files such as /etc/passwd_deleted. [131077520050] |That way, when you add it back, you can use the same user ID and the same password (which is stored encrypted in one of the above files). [131077520060] |Disclaimer: I learned UNIX on my own, so I would not be surprised if there is a better way to temporarily disable the user. [131077520070] |In fact, I don't even know what the private/public keys are that you are talking about. [131077520080] |Also, I am sure there is a find command that can be used for looking up the files with that owner user ID. [131077530010] |Lock the password and change the shell to /bin/nologin. [131077530020] |(Or more concisely, sudo usermod -L -s /bin/nologin username.) [131077540010] |Has anyone tried doing it via sshd_config? [131077540020] |Maybe this could help: http://www.techrepublic.com/blog/opensource/set-up-user-accounts-quickly-and-securely/86 [131077540030] |Also, ensure that PasswordAuthentication is set to no as well, to force all logins to use public keys. [131077550010] |The correct way according to usermod(8) is: usermod --lock --expiredate 1970-01-01 username [131077550020] |(Actually, the argument to --expiredate can be any date before the current date in the format YYYY-MM-DD.) [131077550030] |Explanation:
  • --lock locks the user's password. [131077550050] |However, login by other methods (e.g. public key) is still possible.
  • [131077550060] |
  • --expiredate YYYY-MM-DD disables the account at the specified date.
[131077550070] |I've tested this on my machine. [131077550080] |Neither login with password nor public key is possible after executing this command. [131077560010] |Removing broken packages [131077560020] |Recently, in a bout of frustration with getting phpmyadmin set up, I decided to start from scratch. [131077560030] |Unfortunately, during the uninstall phase, I was prompted for the root password for mysql, which I didn't have on hand at the time. [131077560040] |Suffice to say, it informed me that there would be residual components, since it couldn't properly clean its database connectors. [131077560050] |When I arrived home, I attempted to remove the package through aptitude purge, which turns out to be no more potent than aptitude remove, in that it saw phpmyadmin, attempted to remove it, and failed, since the directories associated with the package were already removed by my earlier attempt. [131077560060] |I tried to reinstall phpmyadmin, but aptitude simply stated that there was no update available and did nothing; if there were an update, I'd probably run into the same problems regardless. [131077560070] |In this regard, I proceeded to clean up mysql by dropping the database it used, and cleaning it from the user tables. [131077560080] |I, however, have no idea what else is left from the package, or even how to clean the hooks in aptitude. [131077560090] |The result of dpkg --purge: [131077560100] |On following Gilles's advice, I tried to re-install the dependency dbconfig-common. [131077560110] |It appears that phpmyadmin cleanly cleared out dbconfig-common. [131077560120] |Attempted to dpkg from archives as suggested by Gilles. [131077560130] |I have a webserver running on php, but I'm willing to risk downtime to get this resolved. [131077570010] |(I'm going to assume you meant aptitude purge and apt-get remove, because the commands you cited don't exist.) [131077570020] |Try dpkg --purge phpmyadmin. [131077570030] |It's lower-level than the other tools, so it might be more effective in this case. [131077580010] |phpmyadmin depends on dbconfig-common, which contains /usr/share/dbconfig-common/dpkg/prerm.mysql. [131077580020] |It looks like you've managed to uninstall dbconfig-common without uninstalling phpmyadmin, which shouldn't have happened (did you try to --force something?). [131077580030] |My advice is to first try aptitude reinstall dbconfig-common. [131077580040] |If it works, you should have a system in a consistent state from which you can try aptitude purge phpmyadmin again. [131077580050] |Another thing you can do is comment out the offending line in /var/lib/dpkg/info/phpmyadmin.prerm. [131077580060] |This is likely to make you able to uninstall phpmyadmin. [131077580070] |I suspect you did what that line is supposed to do when you edited those mysql tables manually, but I don't know phpmyadmin or database admin in general, so I'm only guessing. [131077580080] |The difference between remove and purge is that remove just removes the program and its data files (the stuff you could re-download), while purge first does what remove does and then also removes configuration files (the stuff you might have edited locally). [131077580090] |If remove fails, so will purge.
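Putting those suggestions together, a recovery sequence might look something like this (a sketch; the exact package states on your system may differ):

    # restore the missing dependency so phpmyadmin's prerm script can run
    sudo aptitude reinstall dbconfig-common

    # then retry a full removal of the broken package
    sudo aptitude purge phpmyadmin

    # if that still fails, neutralize the failing prerm line and retry:
    sudoedit /var/lib/dpkg/info/phpmyadmin.prerm   # comment out the offending line
    sudo dpkg --purge phpmyadmin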
[131077590010] |How to make Vim display colors as indicated by color codes? [131077590020] |In short, I'm trying to replace less with vim (vimpager). [131077590030] |I have settings for scripts to spit out colors (and bold and everything nice) whenever they can. less understands the color codes and displays them nicely. [131077590040] |How can I make vim parse the codes and display colors/boldness the way less does? [131077600010] |Two answers: [131077600020] |A short one: you want to use this vim script. [131077600030] |It will conceal the actual ANSI escape sequences in your file, and use syntax highlighting to color the text appropriately. [131077600040] |The problem with using this in a pager is that you will have to make vim recognize when to use this. [131077600050] |I am not sure if you can simply always load it, or if it will conflict with other syntax files. [131077600060] |You will have to experiment with it. [131077600070] |A long answer: The best you can hope for is a partial, non-portable solution. [131077600080] |Less does not actually understand the terminal escape sequences, since these are largely terminal dependent, but less can recognize (a subset of) these, and will know to pass them through to the terminal, if you use the -r (or -R) option. [131077600090] |The terminal will interpret the escape sequences and change the attributes of the text (color, bold, underline, ...). [131077600100] |Vim, being an editor rather than a pager, does not simply pass raw control characters to the terminal. [131077600110] |It needs to display them in some way, so you can actually edit them. [131077600120] |You can use other features of vim, such as concealment and syntax highlighting, to hide the sequences and use them for setting colors of the text; however, it will always handle only a subset of the terminal sequences, and will probably not work on some terminals. [131077600130] |This is really just one of many issues you will run into when you try to use a text editor as a pager. [131077610010] |How to hide user status messages in XChat? [131077610020] |I'm referring to messages like these: [131077610030] |and
  • Right-click the "channel" tab
  • [131077620020] |
  • Point to "settings"
  • [131077620030] |
  • Click on "hide join/part messages"
[131077630010] |Command to lay out a tab-separated list nicely [131077630020] |Sometimes I get as input a tab-separated list which is not quite aligned, for instance: [131077630030] |Is there an easy way to render it aligned? [131077640010] |Here's a script to do it: [131077640020] |aligntabs.pl [131077640030] |usage [131077650010] |So, the answer becomes: [131077660010] |For manual tab stops: expand -t 42,48 [131077660020] |For automatic tab stops, as suggested by alex: column -t [131077660030] |(expand is on all POSIX systems. column is a BSD utility, available in many Linux distributions as well.) [131077670010] |How to enable remote login? [131077670020] |Currently I can only log in to my remote Linux machine via the "ubuntu" user account, without a password (via SSH key authentication). [131077670030] |I have another user configured on the machine, and if I try to log in with its username + password I get: [131077670040] |Disconnected: No supported authentication methods available [131077670050] |How do I configure this user to accept user/pass authentication? [131077680010] |I suspect that your sshd is configured to allow access via public key authentication and to disallow access via password. [131077680020] |There are a couple of things that you can do. [131077680030] |The better option is to generate a key pair for the new account and to copy the public key to the ~/.ssh/authorized_keys file on your remote host. [131077680040] |You can use ssh-keygen or puttygen etc. to generate the keys. [131077680050] |Alternatively, you can enable sshd password authentication. [131077680060] |Edit the /etc/ssh/sshd_config file and ensure that the PasswordAuthentication directive is set to yes. [131077680070] |Save the file and restart sshd, and you should then be able to use passwords. [131077690010] |What is the meaning of $? in a shell script? [131077690020] |When going through a shell script, I saw the term "$?". [131077690030] |What is the significance of this term? [131077700010] |$? expands to the exit status of the most recently executed foreground pipeline. [131077700020] |See the Special Parameters section of the Bash manual. [131077700030] |In simpler terms, it's the exit status of the last command. [131077710010] |In addition to what cjm said above, if the value of $? is 0, then the previous process terminated normally (or successfully). [131077710020] |Otherwise there was some error. [131077720010] |cjm's answer is correct, but "$?" can be used in silly ways in shell scripts, and I'd like to warn against that. [131077720020] |A lot of bad shell scripts have a repeated pattern of code: [131077720030] |If at all possible (readability concerns sometimes intrude), you should code this situation differently; both patterns are sketched below. [131077720040] |This latter usage is faster, does not contaminate the shell's variable namespace with what amounts to temp variables, can often be a lot more readable for humans, and encourages the use of "positive logic", the practice of writing conditionals without negations, which has cognitive simplicity in most situations. [131077720050] |It does away with the use of "$?" for the most part.
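For instance (the grep command and messages are illustrative):

    # the repeated pattern being warned against:
    grep -q pattern somefile
    if [ $? -ne 0 ]; then
        echo "pattern not found" >&2
        exit 1
    fi

    # the "positive logic" alternative: test the command directly
    if grep -q pattern somefile; then
        echo "pattern found"
    else
        echo "pattern not found" >&2
        exit 1
    fi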
[131077730010] |Read the stack of another process? [131077730020] |I'm trying to read a child process's stack, but with no luck. [131077730030] |I know it is possible using ptrace, but ptrace's interface allows you to read only one word at a time, and I'm trying to scan larger portions of the stack. [131077730040] |I've also tried reading /proc/$pid/mem from the boundaries of the stack as extracted from the /proc/$pid/maps file, after first using ptrace to attach to the process (as suggested here), but the read keeps failing (even when running as root), although the same code succeeds when reading from different parts of the process (e.g. the heap). [131077730050] |What am I doing wrong? [131077730060] |Is there any other option? [131077740010] |ptrace's interface allows you to read only one word at a time, and I'm trying to scan larger portions of the stack [131077740020] |Well, just use a loop, then. [131077740030] |I honestly don't see how that constitutes a problem with ptrace; I use it all the time to remotely access processes. [131077740040] |I use something like this: [131077750010] |Here is another strategy that might need tweaking but should be more efficient with large chunks of data. [131077750020] |The idea is to execute syscalls in the remote process in order to retrieve the stack contents. [131077750030] |It will need architecture-specific code, but if you only target x86 / x86_64 it shouldn't be too much hassle. [131077750040] |
  • Create a named pipe such as "/tmp/fifo" in your calling process.
  • [131077750050] |
  • Step into the traced process until it returns from a syscall, using PTRACE_SYSCALL to step, waitpid() to wait, and PTRACE_GETREGS/PTRACE_PEEKTEXT to check the currently executed opcode.
  • [131077750060] |
  • Back up the remote process’s registers and a small area of its stack.
  • [131077750070] |
  • Execute syscalls on the remote process by overriding its stack with your own data: open("/tmp/fifo"), write() the stack contents, close() the descriptor.
  • [131077750080] |
  • Restore the remote process’s state.
  • [131077750090] |
  • Read the fifo data from your calling process.
[131077750100] |There might be more elegant alternatives to the named pipe, but I can’t think of any right now. [131077750110] |The reason I only use syscalls is that remote code injection is pretty unreliable on modern systems due to various security protections. [131077750120] |The drawback is that it will hang until the remote process does a syscall (which can be a problem for some programs that mostly do computations). [131077750130] |You can see some free code implementing most of the work in this source file. [131077750140] |Feedback on the code is welcome! [131077760010] |How do open files behave on Linux systems? [131077760020] |I just renamed a log file to "foo.log.old", and assumed that the application would start writing a new logfile at "foo.log". [131077760030] |I was surprised to discover that it tracked the logfile to its new name, and kept appending lines to "foo.log.old". [131077760040] |In Windows, I'm not familiar with this kind of behavior - I don't know if it's even possible to implement it. [131077760050] |How exactly is this behavior implemented in Linux? [131077760060] |Where can I learn more about it? [131077770010] |To really see how this behavior is implemented, you could look at some Unix programming books. [131077770020] |Mathepic is right in that it is related to an inode. [131077770030] |The actual pathname is only used to open the file; once that's done, the program references it by its open file descriptor. [131077770040] |The file descriptor in turn references the inode, which in this case doesn't care if the underlying file's name has changed. [131077770050] |As far as implementing this in Windows, that's a question for another site. [131077770060] |To read more about this without hitting the books, just search around for Linux filesystems and inodes. [131077770070] |There might not be a clear answer, but you'll be able to understand why. [131077780010] |Programs connect to files through a number maintained by the filesystem (called an inode on traditional Unix filesystems), to which the name is just a reference (and possibly not a unique reference at that). [131077780020] |So several things to be aware of:
  • Moving a file using mv does not change that underlying number unless you move it across filesystems (which is equivalent to using cp then rm on the original).
  • [131077780040] |
  • Because more than one name can connect to a single file (i.e. we have hard links), the data in "deleted" files doesn't go away until all references to the underlying file go away.
  • [131077780050] |
  • Perhaps most important: when a program opens a file it makes a reference to it that is (for the purposes of when the data will be deleted) equivalent to having a file name connected to it.
  • [131077780060] |This gives rise to several behaviors like: [131077780070] |
  • A program can open a file for reading, but not actually read it until after the user has rmed it at the command line, and the program will still have access to the data.
  • [131077780080] |
  • The one you encountered: mving a file does not disconnect the relationship between the file and any programs that have it open (unless you move across filesystem boundaries, in which case the program still has a version of the original to work on).
  • [131077780090] |
  • If a program has opened a file for writing, and the user rms its last filename at the command line, the program can keep right on putting stuff into the file, but as soon as it closes the file there will be no more reference to that data and it will go away.
  • [131077780100] |
  • Two programs that communicate through one or more files can obtain a crude, partial security by removing the file(s) after they have finished opening them. [131077780110] |(This is not actual security, mind you; it just transforms a gaping hole into a race condition.)
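You can watch this behavior directly from a shell; a small illustration (file names are arbitrary):

    exec 3>/tmp/demo.log          # open file descriptor 3 for writing
    echo "first line" >&3
    mv /tmp/demo.log /tmp/demo.log.old
    echo "second line" >&3        # still appends to the same inode
    cat /tmp/demo.log.old         # prints both lines
    rm /tmp/demo.log.old
    echo "third line" >&3         # the data now has no name, but fd 3 still works
    exec 3>&-                     # close: the last reference goes away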
[131077790010] |How can I forward traffic from my publicly available server to a computer that is not publicly available? [131077790020] |My home computer is behind an ISP-level NAT, which means I cannot host game servers. [131077790030] |I have a VPS which I use as a web server. [131077790040] |I want to host a Minecraft game server, but the VPS isn't powerful enough. [131077790050] |Both my computer and my VPS are running Linux. [131077790060] |My computer can connect to my server, but the server can not open connections to my home computer. [131077790070] |What I would like to achieve is this: [131077790080] |
  • Some user connects to port 27015 on my server.
  • [131077790090] |
  • The server then forwards all traffic from port 27015 on the server to port 27015 on my home computer via some connection that I opened with my home computer.
[131077790100] |Can ssh do this? [131077790110] |I know it can do the reverse. [131077790120] |Is there some other program that does this if not? [131077800010] |Short answer: yes, ssh can do this. [131077800020] |The answer's in your question: "reverse" tunneling. [131077800030] |See the -R option to the ssh client: [131077800040] |-R [bind_address:]port:host:hostport Specifies that the given port on the remote (server) host is to be forwarded to the given host and port on the local side. [131077800050] |This works by allocating a socket to listen to port on the remote side, and whenever a connection is made to this port, the connection is forwarded over the secure channel, and a connection is made to host port hostport from the local machine. [131077800060] |More reading at the ever-useful How To Forge (Reverse SSH Tunneling), but the basic principle is you ssh from your private host to your public one, specifying the port to map back. [131077800070] |Remember to set your bind address in the command; otherwise it will only be bound to the local loopback by default. [131077800080] |Hope this helps.
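For the setup described in the question, the invocation (run from the home computer) would look roughly like this — the hostnames are placeholders, and the server's sshd needs GatewayPorts enabled for the public bind to work:

    # from the home computer: expose home port 27015 on the VPS's public interface
    ssh -N -R 0.0.0.0:27015:localhost:27015 user@vps.example.com

    # on the VPS, /etc/ssh/sshd_config must allow binds to non-loopback addresses:
    #     GatewayPorts yes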
[131077810010] |You could probably also do this with netcat, as an alternative. [131077810020] |http://en.wikipedia.org/wiki/Netcat#Port_Forwarding_or_Port_Mapping [131077810030] |Netcat is worth having around. [131077810040] |You can do all kinds of interesting things with it. [131077820010] |Long time Windows user starting to use Linux - what is the essential knowledge I should learn? [131077820020] |Possible Duplicate: Good Introductory resources for linux [131077820030] |I've been a Windows user since forever. [131077820040] |I've touched Linux here and there, but my next job will have me use Linux as my desktop computer 100% of the time. [131077820050] |What are some essential tips/info I should know about? [131077830010] |K3b to split volume across mediums [131077830020] |How do I get K3b to split a volume across several mediums? [131077830030] |For example, if I give it a folder that has 6 GB (not one file of 6 GB), it should create several discs of 4.4 GB automatically, instead of me doing it manually. [131077830040] |If K3b can't do it, is there any software that can? [131077840010] |How do I set up an IPv6 tunnel in Fedora? [131077840020] |I have been experimenting with IPv6. [131077840030] |Hurricane Electric, through their http://tunnelbroker.net service, offers a free IPv6 tunnel. [131077840040] |I would like to be able to use this reliably on my Fedora 14 desktop workstation. [131077840050] |I've tried a number of different recipes on the web to get this set up, but none seem to fit a straightforward workstation install. [131077840060] |I currently use the older static network setup (through /etc/init.d/network) rather than NetworkManager on a wired Ethernet network. [131077850010] |I realize this isn't exactly the OS you are using, but for CentOS 5.5, I just created /etc/sysconfig/network-scripts/ifcfg-sit1: [131077850020] |For the remote tunnel endpoint (xx.xx.xx.xx), it can be found on the tunnelbroker.net site, and the local one (yy.yy.yy.yy) is just my internal (behind NAT) IPv4 address. [131077850030] |The tunnel does seem to take some time to come up. [131077850040] |Still looking into that, actually. [131077860010] |Best distro for USB [131077860020] |What would be the best distro for running off a USB stick? I tried Ubuntu, but it was too slow because of I/O. Would it be possible to run it from memory so it's faster, or would that make initial loading a lot slower? [131077860030] |What I need is something that I can run off a USB stick that boots and shuts down fast, preferably with the same packages as Ubuntu. [131077860040] |I'd also like to know how to make it so it doesn't ask me to install it every time; basically, install it on a USB stick. Size is not much of an issue. [131077870010] |Knoppix is a popular choice. [131077870020] |I use a Gentoo liveDVD, converted to a USB stick. [131077880010] |Tinycorelinux boots incredibly fast. [131077880020] |It may not have all of Ubuntu's features out of the box. [131077880030] |But it has a large set of extensions. [131077880040] |By default, tinycorelinux boots entirely into RAM, so everything is lightning fast. [131077880050] |Even the home folder resides in RAM. [131077880060] |If you are going to experiment with tinycorelinux, I recommend installing qemu in Ubuntu. [131077880070] |Then just call the tinycore-current.iso file from the command line. [131077880080] |Qemu is slow but has worked well with tinycorelinux for me. [131077890010] |Switching distros will not help you much, since your problem is I/O performance, as you assumed. A different distro will only help if you go with a very small one, which in turn will most likely not have the package base of Ubuntu available. [131077890020] |You might have some success with building a custom Ubuntu-based live CD, but then again, you would have to leave out big packages like GNOME etc., so there would not really be a point in using Ubuntu in the first place. [131077900010] |Choosing the file system to use with LVM [131077900020] |Since I found out about LVM, I have been giving more thought to the process of choosing the file system for my future installs. [131077900030] |Usually I'd always choose whatever default option the distro offered me for my partitions. [131077900040] |And that works fine when we are talking about just a simple desktop. [131077900050] |If you are planning to set up a machine as some kind of server, how do you evaluate your alternatives? [131077900060] |I'm planning on never installing a system without LVM again. [131077900070] |Would it limit in any way the options for file systems that I could choose from? [131077900080] |And, if you are doing LVM, would it matter whether you choose ext2, ext3, or ext4, since the maximum partition size is defined by your logical volume, not an actual physical partition? [131077910010] |LVM doesn't restrict what filesystems can be put on top of it. [131077910020] |(It doesn't know or care.) [131077910030] |If you're choosing between ext2, ext3, and ext4, the maximum partition size isn't the main concern, unless of course you need to go beyond the limits of the earlier versions, in which case the choice is mandatory. [131077920010] |One thing to consider is whether the filesystem supports resizing (growing and shrinking), since LVM lets you resize logical volumes. ext3/4 supports resizing, as does btrfs. [131077920020] |I've never tried, but the documentation says that XFS can be resized as well.
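For illustration, growing an ext4 logical volume online might look like this (the volume-group and LV names are made up; shrinking ext filesystems requires unmounting first):

    # grow the logical volume by 5 GiB, then grow the filesystem to match
    lvextend -L +5G /dev/myvg/mylv
    resize2fs /dev/myvg/mylv      # ext2/3/4; for XFS you'd use xfs_growfs instead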
[131077930010] |Mod_Auth_MySQL will not compile under Slackware 13 [131077930020] |I tried to add the mod_auth_mysql (3.0.0) module to my Slackware 13 (Apache 2.2 & MySQL 5.0) system. [131077930030] |I ran the usual command line for apxs (as mentioned in the BUILD file) [131077930040] |but I got the following error: [131077930050] |Seems like the MySQL library does not match what the code expects. [131077930060] |I looked around, and it seems to be the last available version of the module, although it is getting old (June 2005). [131077930070] |Has anyone made it work with a similar setup? [131077930080] |I've been tweaking around for half a day without any success :-( [131077940010] |I found an answer here: http://bugs.gentoo.org/show_bug.cgi?format=multiple&id=192391 [131077940020] |And it works! :-) [131077940030] |It changes APR_XtOffsetOf to APR_OFFSETOF in the .c file. [131077940040] |Patch which solves the problem: [131077950010] |Why did 'apt-get install ffmpeg' fail? [131077950020] |I don't understand how or why apt-get install failed. [131077950030] |It mentions ssmtp, which is a simple SMTP client that I use to send mail from my server to my Google Apps account. [131077950040] |I don't understand what is wrong or how to fix it. [131077950050] |I am using Debian squeeze. [131077960010] |I'm guessing that you set your system's host name to "debian" when setting it up. [131077960020] |The installer should have added an entry for that to /etc/hosts, but it seems to not be there any longer. [131077960030] |Add an entry like this, if you don't see a line with "debian" on it already: [131077960040] |You'll have to start your text editor with sudo to be allowed to save the changes. [131077970010] |The failure is unrelated to ffmpeg; it's due to ssmtp. [131077970020] |When you ran apt-get install ffmpeg, you already had the ssmtp package in an unpacked but not fully-installed state, which in Debian terminology is called “installed but not configured”. [131077970030] |That configuration stage mainly runs the package's postinst script, which typically registers the package's components with the appropriate service (shared libraries, menu entries, etc.). [131077970040] |When apt-get install has finished installing (i.e. unpacking) the packages named on the command line and their dependencies, it “configures” every package (the underlying dpkg command is dpkg --configure --pending). [131077970050] |Warren Young has already explained why ssmtp couldn't be configured. [131077980010] |How can I use $var in a shell brace expansion of a sequence? [131077980020] |I want to use $var in a shell brace expansion. [131077980030] |Simply putting {$var1..$var2} doesn't work, so I went "lateral"... [131077980040] |The following works, but it's a bit kludgey. [131077980050] |Is there a more "normal" way? [131077990010] |You may want to try: [131077990020] |Not sure whether this is the best answer, but it certainly is one. [131078000010] |As you already realized, {1..3} expands to 1 2 3, but {foo..bar} or {$foo..$bar} don't trigger brace expansion, and the latter is subsequently expanded to replace $foo and $bar by their values. [131078000020] |A fallback on GNU (e.g. non-embedded Linux) is the seq command. [131078000030] |Another possibility, if the variables contain no shell special characters, is eval; both are sketched below. [131078000040] |The simplest solution is to use zsh, where rm foo.{$ext0..$extN} does what you want.
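Illustrative versions of those two alternatives (the variable names and file pattern are examples):

    var1=1 var2=5

    # seq-based fallback (GNU):
    for i in $(seq "$var1" "$var2"); do
        echo "file.$i"
    done

    # eval-based brace expansion -- only safe if the variables are known
    # to contain no shell special characters:
    eval "echo file.{$var1..$var2}"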
[131078010010] |Default File Permissions - Ubuntu & VSFTPD [131078010020] |I have a server set up with Ubuntu, Apache2 and VSFTPD. [131078010030] |We virtual-host several sites on this machine. [131078010040] |Typically, I use the same process to set up the new sites, but the default permissions for some are different from the others. [131078010050] |For some sites, I can upload new files and directories through FTP and they receive 644 and 755 permissions respectively. [131078010060] |Just how I want them by default. [131078010070] |For other sites, they receive 0 or 600 as the default new-file permission. [131078010080] |I have to manually change each file to 644. [131078010090] |This is slow and a pain to do. [131078010100] |I'm by no means an expert, but I've checked the permissions and the ownership, and it all seems to be the same. [131078010110] |What am I missing? [131078010120] |PS - I'm sure I left out some relevant information. [131078010130] |Please let me know what else I need to provide. [131078010140] |Thank you! [131078020010] |What you're probably looking for is the "umask" parameter. [131078020020] |If you're uploading files through vsftpd, the behaviour depends upon the way vsftpd is configured. [131078020030] |If it is set up to use system users, you must modify the "local_umask" vsftpd parameter in the configuration file from the default 077 value to 022. [131078020040] |Don't forget to uncomment the line, too, and restart the vsftpd daemon.
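Concretely, that change would look something like this (the config path is the usual default but may differ on your system). A umask of 022 turns the base modes 666/777 into 644 for files and 755 for directories, matching the behavior described above:

    # in /etc/vsftpd.conf: uncomment and set
    local_umask=022

    # then restart the daemon, e.g.:
    sudo /etc/init.d/vsftpd restart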
[131078030010] |What's the purpose of the first argument to the select system call? [131078030020] |From man select: [131078030030] |nfds is the highest-numbered file descriptor in any of the three sets, plus 1. [131078030040] |What is the purpose of nfds, when we already have readfds, writefds and exceptfds, from which the file descriptors can be determined? [131078040010] |I don't know for sure, since I'm not one of the designers of select(), but I'd say it's a performance optimization. [131078040020] |The calling function knows how many file descriptors it put in the read, write and except FDs, so why should the kernel figure it out again? [131078040030] |Remember that in the early 80s, when select() got introduced, they didn't have multi-gigahertz multiprocessors to work with. [131078040040] |A 25 MHz VAX was pretty doggone fast. [131078040050] |Plus, you wanted select() to work fast if it could: if some I/O was waiting for the process, why make the process wait? [131078050010] |In "Advanced Programming in the UNIX Environment", W. Richard Stevens says it is a performance optimization: [131078050020] |By specifying the highest descriptor we're interested in, the kernel can avoid going through hundreds of unused bits in the three descriptor sets, looking for bits that are turned on. [131078050030] |(1st edition, page 399) [131078050040] |If you are doing any kind of UNIX systems programming, the APUE book is highly recommended. [131078050050] |UPDATE [131078050060] |An fd_set is usually able to track up to 1024 file descriptors. [131078050070] |The most efficient way to track which fds are set to 0 and which are set to 1 is a bitset, so each fd_set consists of 1024 bits. [131078050080] |On a 32-bit system, a long int (or "word") is 32 bits, so that means each fd_set is 1024 / 32 = 32 words. [131078050090] |If nfds is something small, such as 8 or 16, which it would be in many applications, the kernel only needs to look inside the first word, which should clearly be faster than looking inside all 32. [131078050100] |(See FD_SETSIZE and __NFDBITS from /usr/include/sys/select.h for the values on your platform.) [131078050110] |UPDATE 2 [131078050120] |As to why the function signature isn't [131078050130] |My guess is it's because the code tries to keep all the arguments in registers, so the CPU can work on them faster, and if it had to track an extra 2 variables, the CPU might not have enough registers. [131078050140] |So in other words, select is exposing an implementation detail so that it can be faster. [131078050150] |
  • BSD 4.4 Lite select source code (select and selscan functions)
  • [131078050160] |
  • Linux 2.6.37 select source code (do_select and max_select_fd functions)