[131095770010] |How limited is the Linux-based distribution on the Nokia N900? [131095770020] |I am interested in what the N900 can do with the Linux-based Maemo 5 system installed. [131095770030] |Can it, for example: [131095770040] |
  • Compile C++ or Java files?
  • [131095770050] |Hack into wifi (I mean advanced WLAN card features)?
  • [131095770060] |Create its own wifi access point?
  • [131095770070] |Act as a WWW server?
  • [131095770080] |Use a normal USB keyboard (after changing the plug to a smaller one)?
  • [131095770090] |Play Flash games?
[131095780010] |The OS is basically a customised version of Debian, and that means that whatever Debian can do, your N900 can. [131095780020] |That's because you can run normal Debian packages on it, and even use official APT sources. [131095780030] |As for the hardware part, stuff like point 2 and point 5, I just don't know. [131095780040] |Note that the Wikipedia page is quite detailed. [131095790010] |Definitely - I have a colleague who has successfully used it for wireless 'hacking' - it works really well, utilising the tools we normally use on our laptops for this. [131095790020] |As @Tshepang says, it is just Debian, cut down a wee bit, so if the default install doesn't come with what you need, apt-get is your friend. [131095790030] |Sadly, I also don't know whether it has USB host support on board, but for everything else you asked - yep (although compile times could be very long - use a cross compiler :-) [131095800010] |Why is bzip2 needed in the kernel patch instructions? [131095800020] |This is from here. [131095800030] |Extract the patch [131095800040] |Test the patch [131095800050] |I looked at the .patch (the diff output of many files) and at the .patch.bz2 file after the bzip2 command, which is also the diff output of many files; they seem to be the same. [131095800060] |My question is why is bzip2 even needed to turn the .patch into a .patch.bz2? [131095800070] |Is it for the redirection to standard output from the -dc option for the patch command? [131095800080] |Even if it is, why not just use the patch command in a form something like this: patch -p1 ? [131095800090] |I don't see why the bzip2 is done here. [131095800100] |Thanks! [131095800110] |Also, I think the bzip2 might have an extra space in the command after web100/, right? [131095810010] |It's unneeded. 
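To see why, here is a tiny round trip (file names are made up for the demonstration) showing that the compressed and plain patch apply identically:

```shell
# Make a tiny patch from two hypothetical files
printf 'hello\n' > old.txt
printf 'hello world\n' > new.txt
diff -u old.txt new.txt > change.patch || true   # diff exits 1 when files differ

bzip2 -k change.patch        # produces change.patch.bz2, keeps the original

# The two forms apply identically:
cp old.txt try1.txt
bzip2 -dc change.patch.bz2 | patch try1.txt      # what the instructions do
cp old.txt try2.txt
patch try2.txt < change.patch                    # the plain .patch works just as well
```

The only real advantage of the .bz2 form is a smaller download; patch itself doesn't care.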
[131095810020] |Those instructions could be abbreviated to: [131095820010] |How to build with make and get only --silent output on screen, but full output to a log file [131095820020] |How can I redirect output from make in such a way that I get only --silent output to the screen, but full make output to a log file? [131095820030] |Or can this be achieved through some stdout/stderr redirection magic? [131095830010] |or [131095830020] |Where filter is a program that passes only what you want to see. [131095830030] |Use grep or sed or something else. [131095840010] |This was answered before with a comment: [131095840020] |if you want full output to the file, then use make >file.log 2>&1 and you'll get a "--silent" output to screen. [131095840030] |This is very basic shell usage. [131095840040] |And four days have passed; I think it no longer needs a response. [131095850010] |How can I install the Flash plugin on a Fedora Core 3 machine? [131095850020] |The latest Flash rpm and tarball have forsaken my machine, which runs the Fedora Core 3 OS. [131095850030] |There are two browsers currently installed on the system. [131095850040] |Firefox 2.0.x and Opera 10.11. [131095850050] |Those two browsers do not display Flash content. [131095850060] |I tried to install the latest FlashPlayer plugin in the browsers. [131095850070] |But installing the RPM displayed a list of missing dependencies. [131095850080] |And even after I copied the libflashplayer.so file into the plugin path of the browsers they won't show me Flash content. [131095850090] |Since I am not permitted to upgrade the OS, which would result in loss of important data, I only want an earlier version of the Flash Player rpm or tarball for Linux (Fedora Core 3). [131095850100] |
  • Where can I find an older FlashPlayer rpm or tarball for Linux?
  • [131095850110] |If it is not possible to install the Flash plugin, what are the other alternatives?
[131095850120] |Thanks. [131095860010] |1) Try the Adobe binary installation. [131095860020] |(Choose Linux -> tar.gz from the Adobe download page) [131095860030] |2) For a free (as in freedom) alternative to Adobe Flash, take a look at GNU Gnash. [131095860040] |It works fine for me. [131095870010] |Fedora Core 3 reached end of life on January 16th, 2006. [131095870020] |That means there have been no security updates for over five years. [131095870030] |Half a decade! [131095870040] |This is the problem you need to solve: [131095870050] |Since I am not permitted to upgrade the OS, which would result in loss of important data, [131095870060] |What if the system were to die? [131095870070] |Or, more importantly, what if known vulnerabilities in Firefox 2.0 and the old Linux kernel you are running enable an attacker to compromise your machine and destroy your data? [131095870080] |The answer to your question is: if you really need that FC3 box unchanged, you need to sandbox that system as completely as possible. [131095870090] |It should not be talking to the public internet, and I'd be pretty suspicious about putting it near my internal network either. [131095870100] |This isn't a theoretical concern — it's a real risk. [131095870110] |And putting Flash on there is only going to increase that risk. [131095870120] |So the answer really has got to be: don't use that system. [131095870130] |If you need Flash — or web browsing in general! — use a different system. [131095870140] |And that different system can (and should!) run a current, modern OS, and so running Flash will be no problem. [131095870150] |And find a way to make your data safe. [131095870160] |Because this problem is only going to get harder, not easier. [131095880010] |How to make a system service out of a .jar file? (Linux) [131095880020] |Hello! [131095880030] |I wrote a little Java server which I'm running on my CentOS 5 VPS. Currently I just ssh onto the server and start the jar from the command line. 
[131095880040] |How can I turn this jar into a system service and set it up such that it starts automatically when the system starts and also restarts automatically if it crashes? [131095890010] |A common solution is to use JSW, I think. [131095900010] |You could use jsvc from the Apache Commons project. [131095900020] |You'll still need to write an init script utilizing jsvc, though. [131095910010] |Ubuntu 10.10 network connection drops a few minutes after starting [131095910020] |The Ubuntu 10.10 network connection works fine at the start, but a few minutes later the network manager pops up a message saying "disconnected, you are offline" and the Chrome browser returns an error message like "name not resolved". [131095910030] |This works fine in Windows 7. [131095910040] |It was fine too in Ubuntu, until I updated it with 300 MB worth of updates. [131095920010] |Show only stderr on screen but write both stdout and stderr to file [131095920020] |How can I use bash magic to get this? [131095920030] |I want to only see stderr output on the screen, but I want both stdout and stderr to be written to a file. [131095920040] |Clarification: I want both stdout and stderr to end up in the same file. [131095920050] |In the order they happen. [131095920060] |Unfortunately none of the answers below does this. [131095930010] |When you use the construction 1>stdout.log 2>&1, both stderr and stdout get redirected to the file because the stdout redirection is set up before the stderr redirection. [131095930020] |If you invert the order you can get stdout redirected to a file and then copy stderr to stdout so you can pipe it to tee. [131095940010] |Let f be the command you'd like to be executed; then this [131095940020] |should give you what you wish. [131095940030] |For example, wget -O - www.google.de would look like this: [131095950010] |You want to duplicate the error stream so that it appears both on the console and in the log file. 
[131095950020] |The tool for that is tee, and all you need to do is apply it to the error stream. [131095950030] |Unfortunately, there's no standard shell construct to pipe a command's error stream into another command, so a little file descriptor rearrangement is required. [131095960010] |How can I find and replace with a new line? [131095960020] |I have a CSV delimited by commas and I want to delimit it by newlines instead. [131095960030] |Input: [131095960040] |Output: [131095960050] |I've written Java parsers that do this stuff, but couldn't this be done with vim or some other tool? [131095960060] |sed isn't working for me: [131095970010] |If your file is delimited by ', ' (commas followed by a space) then [131095970020] |sed 's/, /\n/g' filename.csv >newfile [131095970030] |will do the job. [131095970040] |If it's delimited by ',' (commas without spaces) then [131095970050] |sed 's/,/\n/g' filename.csv >newfile [131095970060] |will work. [131095970070] |Or change the \n to \o12 if your flavour of sed doesn't like it. [131095980010] |Similar to Iain's answer, you can also use tr: [131095980020] |Both answers assume that the CSV is simple (that is, all commas are field separators). [131095980030] |If you have something like a,"b,c",d where b,c is a single field, then things get more difficult. [131095990010] |The use of \n in the replacement text of an s command in sed is allowed, but not mandated, by POSIX. [131095990020] |GNU sed does it, but there are implementations that output \n literally. [131095990030] |You can use any POSIX-compliant awk. [131095990040] |Set the input field separator FS to a regular expression and the output field separator ORS to a string (with the usual backslash escapes). [131095990050] |The assignment $1=$ is needed to rebuild the line to use the different field separator. [131095990060] |(This assumes that your input contains plain comma-and-whitespace-separated values, without any quoting. 
[131095990070] |If there is quoting, you need to move to a real CSV parser in a language such as Perl or Python.) [131096000010] |Seems like the other answers achieve what you want, and a scriptable tool seems the most appropriate choice. [131096000020] |But you asked about vim, so here's how you do it there: [131096000030] |That is, replace every comma+space with a carriage return. [131096000040] |This will then be interpreted as the appropriate line ending character for the file. [131096000050] |(You can check this by searching for \r -- it won't be found). [131096010010] |What commands can I use to enable / disable Apache2 modules? [131096010020] |What are the terminal commands used to enable and disable Apache2 modules? [131096010030] |Update: Since the commands appear to differ based on distribution, I have made this question a community wiki where hopefully each poster can indicate the commands they use along with the pertinent distributions in which they work. [131096020010] |On Debian/Ubuntu you need to look at a2enmod and a2dismod. [131096020020] |There are similar tools for toggling site configurations too (a2ensite and a2dissite). [131096030010] |Server/Desktop Ubuntu [131096030020] |What's the difference between the server version of Ubuntu and the desktop version? [131096040010] |Why should I use Debian 6 with the FreeBSD kernel? [131096040020] |Debian 6 will also be available with the FreeBSD kernel. [131096040030] |Why did they decide to do that and why should I use it? [131096050010] |I think the most compelling reason would be to run ZFS under a familiar GNU/Linux userspace. [131096060010] |Debian kFreeBSD is officially considered a technical preview right now. [131096060020] |This means it works but is not completely ready for production use. [131096060030] |If you just want a usable system, stick with Debian Linux for now. 
[131096060040] |Once it graduates from technical preview status, you may want to reexamine it if you have needs that are better fulfilled by BSD than Linux, such as ZFS and the OpenBSD Packet Filter (pf). [131096070010] |Debian does not target a specific kernel. [131096070020] |Debian GNU/Linux is just one variant (the most popular and advanced). [131096070030] |There are also Debian GNU/NetBSD, Debian GNU/Hurd, Debian GNU/Darwin, and as you mentioned Debian GNU/kFreeBSD (and perhaps more). [131096070040] |Porting Debian to non-Linux kernels is useful for people (users, system administrators, system developers, etc) who are using/developing a non-Linux kernel but would like to take advantage of the Debian (dpkg, apt, aptitude, debconf, the policy) and GNU (coreutils, autotools, bash, gcc, gdb, etc) tools. [131096080010] |After cloning Fedora 14 install to another machine, onboard NIC is seen as eth1 instead of eth0. Why? [131096080020] |I have the following procedure for replicating a Fedora workstation setup. [131096080030] |
  • Boot from a Live CD, make tgz's of the filesystems.
  • [131096080040] |Go to new machine, make filesystems, dump the tgz's in the proper places.
  • [131096080050] |Adjust UUID's in /etc/fstab and /boot/grub/menu.lst
  • [131096080060] |Run grub-install
  • [131096080070] |Reboot!
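The archive-and-restore steps can be sketched with plain tar (all paths here are hypothetical; the UUID and grub-install steps depend on the real block devices):

```shell
# Step 1 (on the old machine, from the Live CD): capture the filesystem as a tgz
mkdir -p oldroot/etc newroot
echo 'UUID=0000-old / ext4 defaults 1 1' > oldroot/etc/fstab
tar czf root.tgz -C oldroot .

# Step 2 (on the new machine): unpack into the freshly made filesystem
tar xzf root.tgz -C newroot

# Step 3 would then edit newroot/etc/fstab with the new UUIDs (see blkid),
# and step 4 would run grub-install against the new disk.
ls newroot/etc
```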
[131096080080] |The nice thing is that DHCP assigns the new machine a unique name, and users have /home mounted on the server. [131096080090] |Graphics configuration isn't a worry either, since recent versions of Xorg are wicked smart in auto-detecting graphics adapters. [131096080100] |So everything works like a snap... with the exception of one small quirk: [131096080110] |On the first boot of the new machine, network startup fails. [131096080120] |It turns out the machine thinks there's no such thing as an eth0, but there is an eth1 and it is the machine's onboard ethernet. [131096080130] |So I have to go to /etc/sysconfig/network-scripts, rename ifcfg-eth0 to ifcfg-eth1, and edit the DEVICE= line in it. [131096080140] |Then I reboot and everything works. [131096080150] |I believe somewhere, in some file, there is information associating eth0 with the MAC of the "Master Mold" machine's eth0. [131096080160] |But where? [131096080170] |P.S.: I don't use NetworkManager. [131096090010] |On my machine it is [131096090020] |/etc/udev/rules.d/70-persistent-net.rules [131096090030] |This is a Debian squeeze machine, but it is probably similar for other Linux distributions. [131096090040] |Mine looks like [131096090050] |Tip: doing [131096090060] |will give you the answer in a couple of minutes, probably. [131096090070] |That is what I did. [131096100010] |How to configure .inputrc so Alt+Up has the effect of cd .. [131096100020] |It should be possible to do that by having Alt+Up generate consecutive "keyboard input" equivalent to 'c', 'd', ' ', '.', '.', ENTER by means of a macro definition. [131096100030] |But I can't figure out how exactly to do it. [131096110010] |To do literally what you're asking, put the following line in your ~/.inputrc: [131096110020] |Here \e\e[A is the byte sequence that your terminal sends when you press Alt+Up (\e is parsed as the escape character); some terminals might send \e[1;3A~ or some other sequence instead. 
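A macro along those lines (a sketch, assuming your terminal sends \e\e[A for Alt+Up; adjust the sequence to what yours actually sends):

```
"\e\e[A": "cd ..\n"
```

Readline replays the right-hand side as if you had typed it, so the trailing \n submits the command.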
[131096110030] |To find out what sequence your terminal sends, run cat and press the key (escape will display as ^[). [131096110040] |In bash, you can in principle bind a key to shell code, so you should be able to do this: [131096110050] |However this doesn't work due to a hard-to-fix implementation bug. [131096110060] |Zsh expert Stéphane Chazelas has a workaround: [131096110070] |The effect is somewhat confusing because the prompt isn't redrawn. [131096110080] |In bash ≥4, add shopt -s autocd to your ~/.bashrc. [131096110090] |Then you can change to the parent directory (or any directory) by entering just .., without having to type the cd command. [131096120010] |Create tar archive of a directory, except for hidden files? [131096120020] |Here's a newb question. [131096120030] |I'm wanting to create a tar archive of a specific directory (with its subdirectories of course). [131096120040] |But when I do it, using the tar command, I get a list of files that were included, for example: [131096120050] |a calendar_final/._style.css [131096120060] |a calendar_final/style.css [131096120070] |As you can see, there are two versions of the same file. [131096120080] |This goes for every file, and there are many. [131096120090] |How do I exclude the temporary files, with the ._ prefix, from the tar archive? [131096130010] |This should work: [131096140010] |Frederik Deweerdt has given a solution that works on GNU tar (used on Linux, Cygwin, FreeBSD, OSX, possibly others), but not on other systems such as NetBSD, OpenBSD or Solaris. [131096140020] |POSIX doesn't specify the tar command (because it varies too wildly between unix variants) and introduces the pax command instead. [131096140030] |The option -w means to produce an archive (-r extracts), and -x selects the archive format. [131096140040] |The option -s '!BRE!!' excludes all files whose path matches the basic regular expression BRE. [131096150010] |You posted in a comment that you are working on a Mac OS X system. 
[131096150020] |This is an important clue to the purpose of these ._* files. [131096150030] |These ._* archive entries are chunks of AppleDouble data that contain the extra information associated with the corresponding file (the one without the ._ prefix). [131096150040] |They are generated by the Mac OS X–specific copyfile(3) family of functions. [131096150050] |The AppleDouble blobs store access control data (ACLs) and extended attributes (commonly, Finder flags and “resource forks”, but xattrs can be used to store any kind of data). [131096150060] |The system-supplied Mac OS X archive tools (bsdtar (also symlinked as tar), gnutar, and pax) will generate a ._* archive member for any file that has any extended information associated with it; in “unarchive” mode, they will also decode those archive members and apply the resulting extended information to the associated file. [131096150070] |This creates a “full fidelity” archive for use on Mac OS X systems by preserving and later extracting all the information that the HFS+ filesystem can store. [131096150080] |The corresponding archive tools on other systems do not know to give special handling to these ._* files, so they are unpacked as normal files. [131096150090] |Since such files are fairly useless on other systems, they are often seen as “junk files”. [131096150100] |Correspondingly, if a non–Mac OS X system generates an archive that includes normal files that start with ._, the Mac OS X unarchiving tools will try to decode those files as extended information. [131096150110] |There is, however an undocumented(?) way to make the system-supplied Mac OS X archivers behave like they do on other Unixy systems: the COPYFILE_DISABLE environment variable. [131096150120] |Setting this variable (to any value, even the empty string), will prevent the archivers from generating ._* archive members to represent any extended information associated with the archived files. 
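Setting it for a single tar invocation might look like this (a sketch; the variable is only honored by the Mac OS X archivers and is a harmless no-op elsewhere, and the directory name is just an example from the question):

```shell
# A throwaway directory to archive
mkdir -p calendar_final
echo 'body { color: black; }' > calendar_final/style.css

# On Mac OS X this suppresses the ._* AppleDouble members in the archive
COPYFILE_DISABLE=1 tar czf calendar.tgz calendar_final

tar tzf calendar.tgz    # no ._style.css entry should appear
```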
[131096150130] |Its presence will also prevent the archivers from trying to interpret such archive members as extended information. [131096150140] |You might set this variable in your shell’s initialization file if you want to work this way more often than not. [131096150150] |Then, when you need to re-enable the feature (to preserve/restore the extended information), you can “unset” the variable for individual commands: [131096150160] |The archivers on Mac OS X 10.4 also do something similar, though they use a different environment variable: COPY_EXTENDED_ATTRIBUTES_DISABLE [131096160010] |How can I generate email statistics from the mutt header cache? [131096160020] |When configured accordingly (set header_cache=) mutt saves the mail headers in a cache file. [131096160030] |That could be used to generate mail statistics. [131096160040] |Does anybody know something about the file format? [131096160050] |Are there any tools available to extract the information contained? [131096160060] |(Besides strings, grep, awk and the like) [131096170010] |Is it possible to use KDE as well as Gnome on a machine? [131096170020] |Is it possible to install KDE and Gnome on your machine (using Fedora) in such a way that when you boot, you can specify whether you want to use KDE or Gnome? [131096170030] |Even better would be if you can switch between the two without having to reboot. [131096170040] |I think it should be possible. [131096170050] |How can I do this? [131096180010] |It's perfectly possible. [131096180020] |Just install both KDE and Gnome using your package manager. [131096180030] |You will then be able to choose which desktop you want in the login screen using the "Sessions" menu. [131096180040] |You'll be able to switch between the desktops by logging out and then choosing the other one in the login screen. [131096180050] |So no reboot is required. [131096190010] |You can install as many desktop environments as you want, and switch between them at the login screen. 
[131096200010] |Terminal charset / font [131096200020] |Hi, [131096200030] |I want to write a game which runs in a terminal. [131096200040] |I do some terminal coloring and wanted to use some unicode characters for nice ascii art "graphics". [131096200050] |But a lot of unicode characters aren't supported in the linux terminal (the non-X terminal, I don't know how you call it... [131096200060] |VT100? [131096200070] |I mean the terminal which uses the text mode for output, no graphic mode, so the same font as in bios is used to display the text.) [131096200080] |For example, I wanted to draw half character "pixels" using the "half block" characters U+2580 (▀) and U+2584 (▄) but these are not supported in the terminal. [131096200090] |(These are only examples - I want to use a lot more special characters...) [131096200100] |Which characters does this font support? [131096200110] |Is there any document or table listing these characters? [131096200120] |Is this device-dependent or is there any "standard"? [131096200130] |Thanks in advance! [131096200140] |Leemes [131096210010] |That terminal is called the Linux console, or sometimes a “vt” (short for virtual terminal). [131096210020] |The terminology can be confusing, especially since it's used inconsistently and sometimes incorrectly. [131096210030] |You can find more information on terminology by reading What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?. [131096210040] |The Linux console supports user-configured fonts, so the answer to your question is “whatever the user set up”. [131096210050] |The utility to change the font is consolechars, part of the Linux console tools. [131096210060] |Only 8-bit fonts are supported by the hardware, though you can partly work around this by supporting unicode-encoded output but only having 256 glyphs (other characters are ignored). 
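A quick way to see whether the current font renders a given glyph is simply to print it, assuming a UTF-8 locale; these are the two half-block characters from the question, written out as UTF-8 byte sequences:

```shell
# U+2580 UPPER HALF BLOCK and U+2584 LOWER HALF BLOCK, as UTF-8 octal escapes
printf '\342\226\200\342\226\204\n'
```

If the console font lacks the glyphs, you will see replacement characters instead of the blocks.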
[131096210070] |Read the lct documentation (online as of this writing, it should be included in your distribution's package) for more information. [131096210080] |If you use the Linux framebuffer, you can have proper unicode support, either directly or through fbterm. [131096210090] |The half-block characters are included in IBM code page 437, which is supported in the ROM of most PC video adapters. [131096210100] |Depending on what characters you need, this may be enough. [131096210110] |Note that very few people use the Linux console these days. [131096210120] |Some people cannot use it for various reasons (not running Linux, running on a remote X terminal, having a video adapter where text mode is buggy, …). [131096210130] |I don't recommend spending much energy on supporting it. [131096220010] |Why are UNIX logins often formed with the first letter of the first name followed by the first seven letters of the last name? [131096220020] |I have seen in many places (especially universities) logins formed in the following way: [131096220030] |
  • First letter of the first name: John Smithsonian → j
  • [131096220040] |First seven letters of the last name: John Smithsonian → smithso
[131096220050] |Thus the login would be jsmithso. [131096220060] |Does anyone know what this way of creating a login is called? [131096220070] |And why is it done that way? [131096230010] |Some time ago, most systems had an 8-character limit for user names. [131096230020] |Given the limited space, first name plus last name is often too long, leading to various conventions such as 1+7 (or 1+1+6 for people who have two first names or a middle name). [131096230030] |This isn't the only convention; common conventions include first name only, last name only, initials only (mostly in the US, rarer in countries where people only have one or two initials), first name initial plus last name (in either order), first name plus last name initial (in either order), user-chosen nicknames, any of the above truncated to 8 characters (sometimes a different number), or something else altogether. [131096240010] |I have also seen many done this way: [131096240020] |last name plus as many letters of the first name as fit into 8 characters. [131096240030] |It's just a way to fit into the available space. [131096250010] |`which`, but all [131096250020] |I think most are familiar with the which command, and I use it frequently. [131096250030] |I just ran into a situation where I'm curious not just which command is first in my path, but how many and where all the commands in all my paths are. [131096250040] |I tried the which man page (typing man which made me laugh), but didn't see anything. [131096260010] |The --all or -a flag will show you all matches in your path, and aliases (at least on Fedora, Ubuntu and CentOS): [131096260020] |On AIX and Solaris, this will get you close: [131096270010] |On some systems, which -a shows all matches. [131096270020] |If your shell is bash or zsh¹, you can use type instead: type foo shows the first match and type -a foo shows all matches. 
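A portable alternative is to walk $PATH by hand. Here is a minimal POSIX-sh sketch (whichall is a made-up name; the parenthesised body keeps the IFS and set -f changes out of the calling shell):

```shell
# Print every executable called $1 found on $PATH, in search order
whichall() (
  set -f                 # no globbing while we split $PATH
  IFS=:
  for dir in $PATH; do
    # an empty PATH component means the current directory
    [ -x "${dir:-.}/$1" ] && [ ! -d "${dir:-.}/$1" ] && printf '%s\n' "${dir:-.}/$1"
  done
)

whichall sh
```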
[131096270030] |The three commands type, which and whence do mostly the same thing; they differ between shells and operating systems in availability, options, and what exactly they report. type is always available and shows all possible command-like names (aliases, keywords, shell built-ins, functions, and external commands). [131096270040] |The only fully portable way to display all matches is to parse $PATH yourself. [131096270050] |Here's a shell script that does this. [131096270060] |If you make it a shell function, make sure to enclose the function body in parentheses (so that the change to IFS and set -f don't escape the function), and change exit to return. [131096270070] |¹ Or ksh 93, according to the documentation, though ksh 93s+ 2008-01-31 only prints the first match when I try. [131096280010] |ksh and zsh have "whence" as a shell built-in. whence -a does what you want under zsh: [131096280020] |I have to clean up PATH in zsh, I have lots of duplicates in it. whence -a works differently under ksh: [131096280030] |I have to say, that seems like a potentially useful behavior, too. [131096290010] |If you don't have a which supporting -a, or whence available, roll your own: [131096300010] |How to find out what color I gave to xsetroot? [131096300020] |I used xsetroot -solid "#xxxxxx" to set a background color. [131096300030] |I like this color, but did not record the command and it's long gone from the bash shell history. [131096300040] |How can I find out what the color was? [131096310010] |I would just use the Color Picker tool from Gimp, which will let you click anywhere on the screen and will give you the RGB value for the color at that point. [131096320010] |Assuming it still is the color of the root window: run xcolorsel (part of the contributed X utility set; some distributions pack it separately), click the “Grab color” button, and click somewhere on your root window. [131096320020] |The numbers you want are the ones below the color list box. 
[131096320030] |Change the display format to “8 bit scaled rgb” to have something familiar. [131096330010] |Which shell should I use - tcsh vs bash? [131096330020] |I have been using tcsh for a long time now. [131096330030] |But whenever I am searching for something, I often find that the methods specified are bash specific. [131096330040] |Even the syntax for the shell scripts is different for the two. [131096330050] |From what I have experienced searching and learning on the internet, bash seems to be the more common shell used. [131096330060] |Even the number of questions on this site tagged bash is way higher (five times more currently) than the number of questions tagged tcsh. [131096330070] |So, I am wondering whether I should switch to bash. [131096330080] |What do you think? [131096330090] |Why should I stick to tcsh OR why should I move over to bash? [131096340010] |Actually, you should move over to zsh! [131096340020] |It's essentially backward-compatible and it's got all the utility of bash with a big huge helping of awesomeness. [131096340030] |There's even a book (although I haven't gotten around to reading it). [131096340040] |I will admit to one reason to not use zsh over bash, but unless you're constantly working at multiple new installations it doesn't really apply. [131096340050] |Especially if you know about the most significant differences. [131096340060] |This, however, is really akin to the differences between vim and vi; you don't want to be naive but practically it's no big deal. [131096340070] |Even hardcore vim users can find their way around emacs and vice-versa. [131096350010] |The compatibility obsessed are missing the point, I think. [131096350020] |If you try to make everything compatible by putting #!/bin/sh at the top and using .sh extensions, but never test on anything but bash, you haven't guaranteed a compatible script! [131096350030] |Better to just use #!/bin/bash and .bash so that users actually know the real requirement. 
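For instance (a throwaway script; the ${var^^} expansion is a bash-4 feature that a plain #!/bin/sh interpreter would choke on, so the shebang honestly names the requirement):

```shell
cat > hello.bash <<'EOF'
#!/bin/bash
greeting="hello"
echo "${greeting^^}"   # bashism: uppercase expansion
EOF
chmod +x hello.bash
bash hello.bash
```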
[131096350040] |If you know zsh, tcsh, or something else better than bash, and have a good reference manual for it, don't hold back. [131096350050] |Just like people expect to install perl or python to be able to run some scripts, they can handle installing your obscure shell, too! :D [131096360010] |zsh probably has a few more similarities to tcsh than bash does. [131096360020] |See: http://zsh.sourceforge.net/FAQ/zshfaq02.html#l13 [131096360030] |People often claim that zsh can do things bash can't, but I have not found that to be the case. [131096360040] |What I have seen is that for zsh it is easier, built in or turned on by default, while in bash it is an addon script, has to be turned on, or is more difficult. [131096360050] |(disclaimer: I am a bash user who has sometimes considered switching to zsh) [131096370010] |After learning bash I find that tcsh is a bit of a step backwards. [131096370020] |For instance, things I could easily do in bash I'm finding difficult to do in tcsh. [131096370030] |My question on tcsh. [131096370040] |The Internet support and documentation is also much better for bash and very limited for tcsh. [131096370050] |The O'Reilly books on bash are great, but I have found nothing similar for tcsh. [131096380010] |You should switch to a POSIX-compliant shell http://pubs.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html , like one of bash, ksh, dash, but not zsh and certainly not tcsh. [131096380020] |It has been a long time since csh was declared a poor choice for scripting: http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/ , tcsh isn't that much different in that area. [131096380030] |When writing scripts, make sure to use POSIX-only constructions (i.e. avoid bashisms and the like) if you don't want to be locked into something non-portable again. [131096390010] |Which shell? [131096390020] |Go for the one with the best "ease-of-use" vs "hassles" ratio... 
[131096390030] |If you can't find enough general examples and explanations for your "Maserati" shell, then its extra performance may be more of a problem than a bonus... [131096390040] |I found this article/site interesting; it may be worth a read: UNIX shell differences and how to change your shell [131096400010] |LVM is a collection of user-space utilities and kernel modules that provide layers of abstraction to the storage available on Unix systems. [131096400020] |Among its many notable features are the ability to:
  • combine many disks into one storage pool
  • [131096400040] |
  • stripe volumes across different physical storage devices
  • [131096400050] |
  • mirror volumes
  • [131096400060] |
  • snapshot volumes
  • [131096400070] |
  • enlarge or shrink volumes, often without system downtime
  • [131096410010] |Logical Volume Management provides storage abstraction, a generalization of disk partitioning. [131096420010] |How to connect two Linux computers with bluetooth? [131096420020] |I have 2 computers, both have Ubuntu 10.10 desktop edition installed. [131096420030] |Both have a Bluetooth dongle, but the manufacturers are different. [131096420040] |I want to connect the two computers and share the Internet connection. [131096420050] |On computer A, I set the bluetooth to 'visible'; and on computer B, I started the 'setup new device' process. [131096420060] |On computer B, it can find A, but when it pops up a dialog and asks me to enter the PIN code on A, I cannot see anywhere on A that I can enter the code. [131096420070] |I also tried from the other direction, but no luck. [131096430010] |I have not tried it, but here's a link for bluetooth networking with linux, or this tutorial (both found via a google search for bluetooth networking linux). [131096440010] |How to associate the Fedora start menu with the Windows keyboard? [131096440020] |I have a Windows keyboard. [131096440030] |How to associate the start menu with the Windows key on the keyboard? [131096450010] |It looks like you're using KDE so try: [131096450020] |and change the shortcut key in the Popup Launch Menu entry from Alt-F1 to the Win key. [131096460010] |In KDE 4, the sequence to find the shortcut key is [131096460020] |System Settings -> Keyboard and Mouse -> Global Keyboard Shortcuts -> Plasma Workspace [131096460030] |then modify the shortcut key for "Activate Application Launcher Widget". [131096460040] |An alternative way to get to the shortcut is to simply right-click on the Kickoff menu and choose "Application Launcher Settings". [131096460050] |Additionally, unless you configure the Win key, it acts as a Meta modifier by default, and cannot be used as a shortcut.
[131096460060] |To change this, create a file ~/.xmodmap and put the following in the file: [131096460070] |keycode 115 = F14 [131096460080] |This makes your system think the Win key is actually the F14 key. [131096460090] |Note: different keyboards sometimes have different key maps for the Win key (e.g. on my Thinkpad, the keycode for the Win key is 133). [131096460100] |You can use the application "xev" from the terminal to determine which keycode applies to your keyboard. [131096460110] |You can modify the file [131096460120] |/etc/kde/kdm/Xsession [131096460130] |and add the command [131096460140] |if [ -f $HOME/.xmodmap ]; then /usr/bin/xmodmap $HOME/.xmodmap; fi [131096460150] |to the bottom of the file, which should load your .xmodmap on startup. [131096460160] |
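The steps above can be sketched as a couple of shell commands (an illustration of the answer's recipe, not part of the original post; keycode 115 is assumed, so check yours with xev first):

```shell
# Write the mapping file described above: make the Win key report as F14.
printf 'keycode 115 = F14\n' > "$HOME/.xmodmap"

# Load it now if an X display is reachable; the Xsession snippet above
# does the same thing at login.
if command -v xmodmap >/dev/null 2>&1 && [ -n "${DISPLAY:-}" ]; then
    xmodmap "$HOME/.xmodmap"
fi
```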
  • Note: you should do the .xmodmap creation and the Xsession modification and then logout/login (or run "/usr/bin/xmodmap $HOME/.xmodmap") before attempting to modify the shortcut key.
  • [131096470010] |Can new glibc versions be used with an old kernel? [131096470020] |Apparently glibc can be compiled with --enable-kernel to support older kernel versions. [131096470030] |However, I haven't been able to find information on what kernel versions are supported on a particular glibc version. [131096470040] |Ideally, I want to see a compatibility matrix for every glibc (2.x) and kernel (2.6.x) release. [131096470050] |Does it exist? [131096470060] |If it doesn't exist, perhaps it's because every glibc version (in the last 5 years or so) supports all kernels since 2.6.0? [131096470070] |(Arch = x86-64, if that matters) [131096480010] |According to the project's README you can use glibc with 2.4 kernels, but you will lose some functionality: [131096480020] |When working with Linux kernels, the GNU C Library version 2.4 is intended primarily for use with Linux kernel version 2.6.0 and later. [131096480030] |We only support using the NPTL implementation of pthreads, which is now the default configuration. [131096480040] |Most of the C library will continue to work on older Linux kernels and many programs will not require a 2.6 kernel to run correctly. [131096480050] |However, pthreads and related functionality will not work at all on old kernels and we do not recommend using glibc 2.4 with any Linux kernel prior to 2.6. [131096480060] |Note that --enable-kernel=VERSION means: [131096480070] |compile for compatibility with kernel not older than VERSION [131096490010] |Is it possible to source a file in bash, but skipping specific functions? [131096490020] |Suppose I have bash_functions.sh: [131096490030] |And in my ~/.bashrc I do: [131096490040] |Is it possible to, when sourcing it, avoid sourcing a specific function? [131096490050] |I mean, source everything in bash_functions.sh, except for test? [131096500010] |That's not possible.
[131096500020] |You can divide bash_functions.sh into two files: one containing the functions you want to source, another containing the functions you don't want to source, and a third script that combines them for normal usage. [131096510010] |Not exactly; however, you can override the test() function. [131096510020] |The last function definition always takes precedence. [131096510030] |So if you source a file that has test() then define a function with the same name after that, the latter function will override the one that was sourced. [131096510040] |I take advantage of this to provide some object-orientedness in some of my scripts. [131096510050] |Example: [131096510060] |bash_functions.sh: [131096510070] |scripty_scripterson.sh [131096510080] |At the command line: [131096520010] |In a function definition foo () { … }, if foo is an alias, it is expanded. [131096520020] |This can sometimes be a problem, but here it helps. [131096520030] |Alias foo to some other name before sourcing the file, and you'll be defining a different function. [131096520040] |In bash, alias expansion is off by default in non-interactive shells, so you need to turn it on with shopt -s expand_aliases. [131096520050] |If sourced.sh contains [131096520060] |then you use it this way [131096520070] |then you get old foo. [131096520080] |Note that the sourced file must use the foo () { … } function definition syntax, not function foo { … }, because the function keyword would block alias expansion. [131096530010] |You may create a temp file, read it in, and delete it afterwards. [131096530020] |To delete the function 'test', I assume here that there is no '}' inside the function. [131096540010] |Different sleep binaries on Mac (Darwin) and in Linux. How to properly handle the differences? [131096540020] |On my Linux box, sleep accepts seconds, minutes and hours. [131096540030] |So: [131096540040] |Sleeps for 10 minutes (or 600s). [131096540050] |sleep on Mac only accepts seconds as an argument.
sleep 10m doesn't work, only sleep 600s. [131096540060] |What can I do? [131096540070] |Create a function named sleep that converts a parameter like 10m or 10h to seconds and calls the builtin sleep? [131096550010] |sleep on Linux accepts seconds too (at least all the versions I've ever seen); can't you just use sleep 600 on both? [131096560010] |You could use homebrew for Mac OS X: https://github.com/mxcl/homebrew and install the coreutils package from there. [131096560020] |That will allow you to install the GNU version of sleep that handles the same parameters as the Linux version. [131096560030] |Note that by default it installs the binaries with a 'g' prefix, so the command will actually be named gsleep, but the package provides a script file to alias all commands. [131096570010] |What is the proper way to manage multiple python versions? [131096570020] |I have a machine with Python 2.6 installed as the default. [131096570030] |Then, I installed Python 2.7, and manually created /usr/bin/python as a symlink to the new installation. [131096570040] |Then, I was running into problems with command-not-found. [131096570050] |I'm trying to reinstall it: [131096570060] |and I get this error: [131096570070] |/usr/bin/python does not match the python default version. [131096570080] |It must be reset to point to python2.6 [131096570090] |But I really want python 2.7 to be the default. [131096570100] |How do I fix this mess? [131096580010] |How do I fix this mess? [131096580020] |Nothing more than reinstalling python2.7. [131096580030] |It will undo your change (the symlink). [131096580040] |Why do you want it as the default? [131096580050] |Each time you need it, just use python2.7 or include #!/usr/bin/python2.7 (the shebang) at the beginning of your scripts. [131096580060] |If you insist on having python2.7 as the system-wide default, use a later release of Ubuntu (currently it's Ubuntu 11.04, codenamed Natty). [131096580070] |It uses 2.7 as the default, instead of 2.6.
[131096580080] |In the future, avoid manual interventions like what you did with the symlink. [131096580090] |This goes especially for distro-managed files, and most especially for complex beasts like Python installations. [131096590010] |What is the need for the 'fakeroot' command in Linux? [131096590020] |Why do we need the fakeroot command at all? [131096590030] |Can't we simply use the sudo or su commands? [131096590040] |The man page says: [131096590050] |fakeroot - run a command in an environment faking root privileges for file manipulation [131096590060] |About.com says: [131096590070] |Gives a fake root environment. [131096590080] |This package is intended to enable something like: dpkg-buildpackage -rfakeroot i.e. to remove the need to become root for a package build. [131096590090] |This is done by setting LD_PRELOAD to libfakeroot.so, which provides wrappers around getuid, chown, chmod, mknod, stat, ..., thereby creating a fake root environment. [131096590100] |If you don't understand any of this, you do not need fakeroot! [131096590110] |My question is, what special purpose does it serve that a simple su or sudo doesn't? [131096590120] |For example, for repacking all installed packages in ubuntu we give the following command: [131096590130] |Can we do the above command with sudo or su instead of fakeroot like this: [131096590140] |EDIT: [131096590150] |Running: [131096590160] |gives me this error: [131096590170] |control directory has bad permissions 700 (must be >=0755 and <=0775) [131096590180] |Any reason why? [131096600010] |I've used it for package building scripts. [131096600020] |I was not sure that the person running the script had root-level access, but the script still needed to generate, say, a tar file which contained files that belong to root.
[131096600030] |The simplest way to do it was to run the package building script under fakeroot, which tricked the archiver into believing that the files belonged to root, and packed them up as such inside the archive. [131096600040] |This way, when the package was unpacked on the destination machine (a different machine altogether), the files didn't belong to weird or non-existent users. [131096600050] |Thinking about it, the only place I've seen this was for building some kind of archive: rootfs of embedded systems, tar.gz archives, rpm packages, .deb packages, etc. [131096610010] |AFAIK, fakeroot runs a command in an environment wherein it appears to have root privileges for file manipulation. [131096610020] |This is useful for allowing users to create archives (tar, ar, .deb etc.) with files in them with root permissions/ownership. [131096610030] |Without fakeroot one would need to have root privileges to create the constituent files of the archives with the correct permissions and ownership, and then pack them up, or one would have to construct the archives directly, without using the archiver. [131096610040] |fakeroot works by replacing the file manipulation library functions (chmod(), stat() etc.) with ones that simulate the effect the real library functions would have had, had the user really been root. [131096610050] |Synopsis: [131096610060] |Check more here: fakeroot [131096620010] |Imagine that you are a developer/package maintainer, etc. working on a remote server. [131096620020] |You want to update the contents of a package and rebuild it, download and customize a kernel from kernel.org and build it, etc. [131096620030] |While trying to do those things, you'll find out that some steps require you to have root rights (UID and GID 0) for different reasons (security, overlooked permissions, etc). [131096620040] |But it is not possible to get root rights, since you are working on a remote machine (and many other users have the same problem as you).
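As a sketch of that archive-building workflow (my own illustration, assuming fakeroot and GNU tar are installed; the file names are made up):

```shell
# Everything inside the quoted script sees faked root credentials, so the
# chown "succeeds" and tar records root ownership, all without real root.
fakeroot sh -c '
    mkdir -p pkgroot/usr/bin
    echo demo > pkgroot/usr/bin/tool
    chown root:root pkgroot/usr/bin/tool
    tar -C pkgroot -cf pkg.tar usr
'

# Outside fakeroot, list the archive: members show up as root/root.
tar -tvf pkg.tar
```

Note that the fakery lasts only for the fakeroot session; on disk, pkgroot is still owned by the invoking user.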
[131096620050] |This is exactly what fakeroot does: it presents a fake effective UID and GID of 0 to the environment that requires them. [131096620060] |In practice you never get real root privileges (as opposed to su and sudo, which you mention). [131096630010] |I am failing to build gudev with JHBuild [131096630020] |When I run jhbuild buildone gudev I get: [131096630030] |NOTES: [131096630040] |
  • I verified that I do have /opt/gnome/lib/libpython2.5.so.1.0.
  • [131096630050] |
  • At the time of writing, I'm running the latest JHBuild.
  • [131096630060] |
  • I used jhbuild bootstrap --ignore-system to avoid any incompatibilities that may arise from my Debian packages. [131096630070] |Note that the Python 2.5 so file is built and installed by this command.
  • [131096640010] |You've installed a shared library in a non-standard location, so it's not found. [131096640020] |If you want the libraries in /opt/gnome/lib to be available automatically to all programs, add this directory to /etc/ld.so.conf, then run ldconfig (as root). [131096640030] |If /etc/ld.so.conf contains a line like include /etc/ld.so.conf.d/*.conf, rather than add your entry directly to /etc/ld.so.conf, create a file /etc/ld.so.conf.d/tshepang.conf and add /opt/gnome/lib to that file. [131096640040] |If you only want the libraries in /opt/gnome/lib to be available on request, or don't have root permissions, add that directory to the LD_LIBRARY_PATH environment variable. [131096640050] |(It's a colon-separated list, just like PATH, but for libraries instead of executables.) [131096640060] |A third possibility is to tell the /opt/gnome/bin/python binary to look for libraries in /opt/gnome/lib, but you have to do that when you build the executable. [131096640070] |Check the JHBuild documentation for a setting like “rpath” or “runtime library path”. [131096650010] |Free/Open Source secure Skype alternative on Fedora & OpenBSD? [131096650020] |Criteria: [131096650030] |
  • Audio/video calls
  • [131096650040] |
  • Encrypt the whole traffic [with a good encryption]
  • [131096650050] |
  • Ported to Windows 7 too
  • [131096650060] |
  • Runs on Fedora, OpenBSD
  • [131096650070] |Does anybody know a good alternative? [131096660010] |There's Speak Freely, a Windows-only program, but development was halted many years ago (Windows 7 did not exist yet, but there were a Windows and a Linux version). [131096660020] |So if you fancy picking it up where it was left, that could be an option. [131096670010] |Well, there are Ekiga and its various cousins, e.g. [131096670020] |Twinkle, which support the SIP standard. [131096670030] |Unfortunately my experience is that they do not work as reliably as Skype. [131096670040] |In particular, Ekiga seems to get upset by Flash. [131096670050] |That is understandable. [131096670060] |I also find Flash quite upsetting. [131096670070] |If you can get Ekiga to work, its rates via Diamondcard.us are a lot cheaper than Skype's, particularly for SMS, if you use that. [131096670080] |The cost of an SMS for the locations I checked is around a third of Skype's. [131096670090] |The difference for regular calls is less dramatic but still significant. [131096670100] |And it is free (as in freedom) software, and seems to be quite cross-platform. [131096670110] |I think Ekiga does not currently support encryption, so that would violate one of your criteria. [131096680010] |There isn't any yet :( that's the correct answer. [131096680020] |But thank you! [131096690010] |Privoxy redirect rule for Wikipedia [131096690020] |I have a few Privoxy rules that can redirect HTTP Wikipedia [en,de] traffic to HTTPS: (a little part from the "user.action" file) [131096690030] |So you get the problem: is there any way to put a "regexp" or something in place of "en", "de"? [131096690040] |There are hundreds of other languages; I think it's a bad solution to write them all down. [131096690050] |There is another question: is my solution good so far? [131096690060] |I'm asking that because if I visit https://secure.wikimedia.org/wikipedia/en/wiki/File:Nokota_Horses_cropped.jpg that's OK, it's using HTTPS.
[131096690070] |BUT if I click on the picture: http://upload.wikimedia.org/wikipedia/commons/d/de/Nokota_Horses_cropped.jpg it's using HTTP! [131096690080] |So this is not good. [131096690090] |Are the pictures on an HTTP-only server, or can I write another redirect rule to view the pictures over HTTPS? [131096700010] |Privoxy's redirect action uses limited regular expressions to match and rewrite URLs. [131096700020] |Luckily, backreferences are supported. [131096700030] |You can rewrite your redirect match in such a way as to support two-letter language codes, plus the single three-letter code you mentioned: [131096700040] |I've replaced your original two-letter language code with "(..|war)". [131096700050] |The parentheses create a backreference which can later be referred to as "$1". [131096700060] |The two dots match any two characters. [131096700070] |The pipe character is a logical "or" operator, making matches against strings on either side. [131096700080] |You can use the pipe multiple times within a match group. [131096700090] |You can use multiple backreferences in a single regex. [131096700100] |Increment the number used to refer to the match (i.e. $2, $3, etc.). [131096700110] |The Privoxy user manual appendix describes support of regular expressions and there are more useful examples there. [131096700120] |For your second question, you will have to write additional redirect actions for each URL you want to redirect to HTTPS. [131096700130] |This will be cumbersome, as you will have to tune your regexps for each site's URL patterns, and the site must of course offer the content over SSL also. [131096710010] |Is there a user interface in Emacs allowing one to "grab" the buffer's filename conveniently? [131096710020] |It happens quite often that I want to use the path of the file opened in a certain buffer in Emacs (either the full path or the basename) in another place (a buffer or a different X program, say, a terminal).
[131096710030] |I wonder whether there is some pre-defined subsystem in the Emacs "user interface" that would copy the filename of the current buffer ((buffer-file-name)) to the kill-ring. [131096710040] |Related things: There is a simple command in emacs-w3m that does an analogous thing (y -- w3m-print-current-url): it prints the URL and copies it to the kill-ring. [131096710050] |Of course, I could simply define the command I want, but I'm asking this question because I hope to learn some user interface subsystem of Emacs that includes such a possibility among other features. [131096710060] |(Perhaps, some buffer and path manipulation interfaces.) [131096710070] |So that I will know more useful features of Emacs. [131096720010] |The quickest way to copy the name of the current file in the default setup is [131096730010] |Is there a convenient general way to "grab" the echoed result of a command in Emacs (of M-: or M-!)? [131096730020] |Sometimes, I want to insert the result of an Emacs command (that has been echoed in the echo area) into another buffer or another running X program. [131096730030] |So, I'd like to put it in the kill-ring. [131096730040] |What would be a convenient way to do this? [131096730050] |For example: I could run a query with a shell command while in dired mode, say: !rpm -qf (to find out which package owns the selected file in the directory listing), and then want to insert the result somewhere else. [131096730060] |Or, another example: if I needed the filename of the current buffer (as in Is there a user interface in Emacs allowing one to "grab" the buffer's filename conveniently?), and there was not yet any predefined command for this, I could at least do M-:(buffer-file-name) and then use this general-purpose way to copy the shown result to the kill-ring in order to paste it later.
[131096730070] |(Of course, I could eval (kill-new (buffer-file-name)), but this example here is to illustrate what kind of general-purpose way to do the grabbing of the echoed result I'm looking for.) [131096740010] |All messages echoed in the message area are saved in the *Messages* buffer, so just switch to it (C-h e, view-echo-area-messages) and select what you want. [131096740020] |If you want to get the value of an expression that doesn't depend on the current buffer, you can also switch to the *scratch* buffer. [131096740030] |Type your expression and press C-j (eval-print-last-sexp). [131096750010] |Type C-u before either M-: or M-! to get the result inserted instead of sent to the echo area. [131096750020] |To get things directly into the kill ring, you need to dabble in Elisp. [131096750030] |Something like this (untested): [131096760010] |Numbering convention for the Linux kernel? [131096760020] |What is the convention for numbering the Linux kernels? [131096760030] |AFAIK, the numbers never seem to decrease. [131096760040] |However, I think I've seen three kinds of schemes: [131096760050] |
  • 2.6.32-29
  • [131096760060] |
  • 2.6.32-29.58
  • [131096760070] |
  • 2.6.11.10
  • [131096760080] |Can anybody explain what are the interpretations of these numbers and formats? [131096770010] |"Linux kernel version numbering" at wikipedia: http://en.wikipedia.org/wiki/Linux_kernel#Version_numbering [131096780010] |2.6.32-29: 2.6.32: base kernel, -29 final release by ubuntu [131096780020] |2.6.32-29.58: 2.6.32: base kernel, -29.58 ongoing release (-29) by ubuntu [131096780030] |2.6.11.10: 2.6.11: base kernel, .10 tenth patch release of it. [131096780040] |(2.6.11 was chosen by volunteers (read Greg KH) to be a "long term maintenance" release). [131096790010] |NetBSD 5.1 NDIS Kernel Compile Error [131096790020] |Hello, everyone! [131096790030] |I have an old Toshiba Satellite 4015CDT, with Pentium II MMX, 32MB RAM, 4GB HDD. [131096790040] |It also has one USB 1.0 port, parallel and serial ports, a 3.5" floppy drive and a CD-ROM drive (almost dead). [131096790050] |I've installed NetBSD on this machine (full install) and now I want to connect it to the Internet. [131096790060] |Although it has one PCMCIA modem card in it, it is obviously not an option. [131096790070] |In order to do that I tried to connect my Realtek RTL8192U wireless adapter, but although the system gives me the device's name (through dmesg) it fails to recognize it as a network adapter. [131096790080] |I followed the instructions here and here to recompile the kernel with NDIS support (I gave this configuration the name "NDIS"). [131096790090] |At compile time I got the following error messages: [131096790100] |I would very much appreciate any help on this matter. [131096790110] |Thanks in advance. [131096790120] |P.S. [131096790130] |If it is of any help, the system was installed by first booting off the installation floppies ("boot1.fs" and "boot2.fs" downloaded from the official ftp server), and then the packages were copied from a USB flash drive (packages also downloaded from the ftp server). [131096800010] |How to replace quotation marks in a file with sed? 
[131096800020] |I have a file that contains multiple lines of XML. [131096800030] |I would like to replace certain parts of the file. [131096800040] |Some parts of the file contain quotation marks (") which I would like to replace. [131096800050] |I have been trying to escape the quotation mark with \, but I don't think this is working based on the result of my file. [131096800060] |Here is an example of one of my sed commands: [131096800070] |Is this how you escape quotation marks in a sed command or am I doing something wrong? [131096810010] |Two tips: [131096810020] |
  • You can't escape a single quote within a single quote. [131096810030] |So you have to close the quote, add an escaped quote, then open the quotes again. [131096810040] |That is: 'fooo'\''bar'.
  • [131096810050] |
  • (optional) You don't necessarily have to use / in sed. [131096810060] |I find that using / and \ in the same sed expression makes it difficult to read.
  • [131096810070] |For example, to remove the quotes from this file: [131096810080] |$ cat /tmp/f aaa"bbb"'ccc'aaa [131096810090] |Given my two tips above, the command you can use to remove both double and single quotes is: [131096810100] |sed -e 's|["'\'']||g' /tmp/f [131096810110] |If you ignore the funky features I used, this translates to: s/['"]//g. [131096810120] |It means remove all characters matching either ' or ". [131096810130] |You probably need something more complex than this to do what you want, but it's a start. [131096810140] |I hope it helps. [131096820010] |Negatives/downsides of Ubuntu's no-root sudo setup [131096820020] |By default, Ubuntu does not create a root account that the user can log in to. [131096820030] |Instead, users use sudo when they need to perform actions that require administrative access. [131096820040] |Obviously, the people at Ubuntu feel like this is a good idea, and everything I've read on the Ubuntu site gives plenty of reasons why they think it's a good idea (see RootSudo @ the Ubuntu Wiki, for example). [131096820050] |However, many other mainstream distributions, such as Debian, Gentoo, etc., don't set things up this way by default, and I'm trying to figure out why. [131096820060] |If Ubuntu's default sudo-root setup is such a good idea, why aren't all of the other mainstream distros doing it too? [131096820070] |This leads me to believe that there might be strong reasons for NOT setting it up this way; but I'm having trouble finding anything that gives any details on this. [131096820080] |All I'm finding is articles/posts that talk about how great it is ... [131096820090] |So my question is: Are there any major problems with Ubuntu's sudo setup (insecurity, functional limitations, etc.) that prevent other distributions from using this setup, and if so what are they?
[131096820100] |Of course, it might simply be a great idea that other distros have just been slow to pick up on, or resistant to because it's different from how things have worked for the past 30 years. [131096820110] |If that's the case, I'd like to know that too. [131096830010] |Basically, sudo gives you administrative privileges. [131096830020] |If you have no root account, and your user has ALL=(ALL) ALL in /etc/sudoers, then your user is a superuser. [131096830030] |It's not recommended since you can execute commands with administrative privileges using sudo and YOUR user password. [131096830040] |The administrator MUST configure sudo to admit some commands for the users, but some administrative tasks will be root-only, so root is the real superuser. [131096830050] |The real problem is that privileges are a privilege. [131096830060] |The MOST secure system will have just 1 superuser, and that's root. [131096830070] |See this [131096840010] |In those other distros you mention, it is advisable to create (or rather uncomment) a line in /etc/sudoers that gives the group wheel unlimited access, and then add yourself to that group. [131096840020] |This is not exclusive to Linux, but also standard on *BSD, and I believe even on Mac OS X (not sure about the last one, as I don't use Macs regularly). [131096850010] |As mattdm pointed out, this question (for the most part) has already been answered here: Which is the safest way to get root privileges: sudo, su or login? [131096860010] |The only non-subjective disadvantage I know of is that when the root user does not have a password set, it allows access to single-user mode without a password. [131096860020] |A truly secure machine will be in a locked case, have booting from removable media disabled, a BIOS password set to prevent changes, a bootloader password set to prevent the kernel boot cmdline from being changed (so no adding init=/bin/sh), and a password would be required to access single-user mode.
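For reference, the wheel-group arrangement described in the answers above boils down to one sudoers line plus a group membership (a sketch; the user name is a placeholder, and sudoers should only ever be edited with visudo):

```shell
# In /etc/sudoers (edit via visudo), uncomment the line:
#   %wheel ALL=(ALL) ALL
# then add yourself to the group (run as root; "yourname" is a placeholder):
#   usermod -a -G wheel yourname
# Afterwards you can check your membership like this:
if id -nG | tr ' ' '\n' | grep -qx wheel; then
    echo "in wheel"
else
    echo "not in wheel"
fi
```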
[131096870010] |Are all system call error numbers unique? [131096870020] |I'm writing a program for Systems Programming in Unix, and one of the requirements is to process all possible error returns from system calls. [131096870030] |So, rather than having a function tailored to each system call, I'd like to have one function take care of that responsibility. [131096870040] |So, are all error number returns unique? [131096870050] |If not, what areas of overlap exist? [131096880010] |There are two aspects: the way system calls signal that an error occurred, and the way the specific error is reported. [131096880020] |Most system calls signal that an error occurred by returning -1, but this is not completely universal (for example, some system calls are always successful, e.g. getpid). [131096880030] |If you know an error occurred, the error code is always in errno¹. [131096880040] |There are standard values defined in errno.h, and every Unix variant adds a few of its own. [131096880050] |Error codes are known by constants whose name begins with E; the numeric values vary from OS to OS. [131096880060] |These error codes are standard (e.g. EACCES always means "permission denied", EIO always means "input/output error", …), but what precisely each error message means depends on the system call. [131096880070] |The standard functions strerror and perror provide error messages that you can display to a user. [131096880080] |¹ Note that if no error occurred during the last system call or C library function call, errno may contain garbage. [131096890010] |The only overlaps I'm aware of are synonyms in areas that historically have differed between AT&T and BSD-derived Unixes. [131096890020] |For example, AT&T Unix's EAGAIN means the same thing as BSD's EWOULDBLOCK, so they have the same value on systems that define both. [131096900010] |No, there should be no overlaps in errno.h on a given system.
[131096900020] |Check your errno.h (most likely somewhere under /usr/include) for defines starting with E, as in ENOENT and such, and make a switch() statement handling each case. [131096900030] |Then you can call your own function for all system call errors. [131096900040] |(Sounds a lot like you're implementing perror(3).) [131096910010] |Hibernate and security considerations [131096910020] |How do Linux and OSX handle sensitive memory pages (e.g. cryptographic keys) when the running OS is suspended to disk? [131096910030] |If the memory image written is encrypted, how are its keys handled? [131096920010] |When you hibernate a computer all the memory (including all cryptographic keys) is written to the swap. [131096920020] |I can't speak for all the Linux distributions and I am not familiar with OSX, but Ubuntu uses cryptsetup and LVM by default (on the alternate install CD). [131096920030] |The swap is a logical volume backed by the same encrypted physical volume that holds all the data. [131096920040] |When you boot, the initramfs asks for the password, opens the encrypted volume and restores the content from swap. [131096920050] |So in this case your keys are safe. [131096920060] |I recommend that you try it with your system before you put any sensitive data on it. [131096920070] |Check if the swap is encrypted and you can really resume from it. [131096920080] |If it's not encrypted or you can't resume, don't use it. [131096920090] |Power down the computer instead. [131096920100] |Suspend to RAM prevents the keys from hitting the disk, but there are ways one can get past the screensaver password. [131096930010] |Wikipedia entry that explains what gets preserved and what gets turned off in different power states. [131096940010] |"Permission denied" for running an executable file on a Linux machine.
[131096940030] |But when I try to run this executable file on another linux machine, the system says "Permission denied". [131096940040] |However, after I re-compile the source code on this second machine and run the executable file, the program just runs fine. [131096940050] |Can you let me know what may be the problem? [131096940060] |Thanks. [131096950010] |At a guess, you copied it over with a utility that doesn't preserve file modes. [131096950020] |Try chmod +x. [131096960010] |awk programming [131096960020] |Write a shell script which uses awk to read in the data file students.txt and output the data in the tabbed format as shown: [131096960030] |Surname Forename MSc stream Date of Birth [131096960040] |Smith John IT 15.01.1987 [131096960050] |Taylor Susan IT 04.05.1987 [131096960060] |Thomas Steve MIT 19.04.1986 [131096960070] |

    #!/bin/sh

    [131096960080] |awk '{print $1, $2, $3, "stream", $4}' [131096970010] |How's a module approved to be included into the linux kernel? [131096970020] |I'm now compiling the linux kernel 2.6, and finding there are more than 1,000 modules in total. [131096970030] |How is a module approved to be included into the linux kernel? [131096980010] |A patch or a git pull request is submitted with a request for comments. [131096980020] |This is sometimes done to the kernel mailing list, but is frequently done on other lists pertaining to the subject of the patch first. [131096980030] |Sometimes discussion about a proposed module is brought up before any code is even written. [131096980040] |People ask why the patch is necessary, state their objections, and point out improvements that could be made. [131096980050] |This is an iterative process. [131096980060] |When the author is comfortable, he submits it to the Linux kernel mailing list during a time called the merge window. [131096980070] |The moment an official release is made, the opening of the merge window for the next version begins. [131096980080] |As part of the closing of the merge window, a patch is either accepted or not. [131096980090] |If the patch is accepted, the only further changes to that section of code that are allowed are bug fixes. [131096980100] |Also as part of the closing of the merge window, a new RC (release candidate) version of the kernel is released. [131096980110] |Almost always, people will have problems with the patch and bugs will need to be fixed or the patch will be reverted. [131096990010] |In Gentoo, what is the difference between amd64, ~amd64 and ~amd64-linux? [131096990020] |When I run equery depgraph www-client/chromium-10.0.648.151, not all dependencies are available. [131096990030] |Some show M[package.mask], while others show [missing keyword]. [131096990040] |My ACCEPT_KEYWORDS is ~amd64-linux, according to emerge --info. 
[131096990050] |I experimented with different ACCEPT_KEYWORDS (as an environment variable passed to equery), and all have different missing dependencies. [131096990060] |Among all possible combinations, only with ACCEPT_KEYWORDS='amd64 ~amd64 ~amd64-linux' can all dependencies be satisfied at once. [131096990070] |Here are my questions: [131096990080] |
  • Is ACCEPT_KEYWORDS='amd64 ~amd64 ~amd64-linux' a valid configuration?
  • [131096990090] |
  • [131096990090] |I learned from the documentation that amd64 means stable, and ~amd64 means unstable. [131096990100] |What about ~amd64-linux?
  • [131096990110] |
  • If I select ~amd64, equery wouldn't use the packages available only to amd64, resulting in missing dependencies. [131096990120] |Is this expected? [131096990130] |If so, should unstable testers use at least ACCEPT_KEYWORDS='amd64 ~amd64' instead of ACCEPT_KEYWORDS='~amd64'?
  • [131096990140] |
  • Does the order of the keywords matter?
  • [131096990150] |Additional info: I installed Gentoo Prefix following this guide. [131096990160] |By default, $EPREFIX/etc/make.profile is a symlink to $EPREFIX/usr/portage/profiles/prefix/linux/amd64 and contains a make.defaults that has ACCEPT_KEYWORDS="-amd64 ~amd64-linux". [131096990170] |Neither $EPREFIX/etc/make.conf nor $EPREFIX/etc/make.globals has ACCEPT_KEYWORDS configured. [131096990180] |According to eselect profile list, no profile is selected. [131097000010] |1. Is ACCEPT_KEYWORDS='amd64 ~amd64 ~amd64-linux' a valid configuration? [131097000020] |From man make.conf: [131097000030] |ACCEPT_KEYWORDS = [space delimited list of KEYWORDS] [131097000040] |So ACCEPT_KEYWORDS='amd64 ~amd64 ~amd64-linux' is a valid combination. [131097000050] |2. What about ~amd64-linux? [131097000060] |amd64-linux is a Prefix thing. [131097000070] |I don't know much about Prefix, but I can see amd64-linux in the list of valid keywords at /usr/portage/profiles/arch.list, in the section named "Prefix keywords". ~amd64-linux is just the testing counterpart of amd64-linux. [131097000080] |3. ACCEPT_KEYWORDS='amd64 ~amd64' vs just ~amd64 [131097000090] |If you have ~amd64 in your ACCEPT_KEYWORDS, portage will use all the latest ebuilds, which often contain a lot of unstable stuff. [131097000100] |I think that's why missing dependencies are to be expected. [131097000110] |For example, that can happen if you want to install software-a, and the latest one in the testing branch is software-a-2.3.4, which requires library-b-5.6.7, which doesn't have an ebuild yet. [131097000120] |Regarding amd64 ~amd64 and just ~amd64, they are the same, really, because if your architecture is amd64 you will have amd64 in ACCEPT_KEYWORDS, no matter what. [131097000130] |4. Does the order of the keywords matter? [131097000140] |No, because it's just a matter of whether your ACCEPT_KEYWORDS variable contains a certain keyword or not. [131097000150] |It's like a set (unordered). 
[131097000160] |Having used Gentoo for a while, I still don't dare to put ~amd64 in my ACCEPT_KEYWORDS. [131097000170] |It's so unstable that it's really not recommended; setting it up for the first time guarantees a lot of breakage. [131097010010] |The ACCEPT_KEYWORDS environment variable is for allowing "all" not-yet-marked-stable packages/versions for the current architecture to be built. [131097010020] |The ~ in front of an arch means unstable (not "completely" tested). [131097010030] |The often better approach is to use /etc/portage/package.keywords and list the package in there with the ~amd64 keyword if you really need the latest build. [131097010040] |By the way: amd64 firewall, isn't that how wikipedia defines overkill? [131097010050] |
  • Yes
  • [131097010060] |
  • Never heard of "~amd64-linux", but the ~ means "unstable".
  • [131097010070] |
  • No, it is not expected that "~amd64" excludes "amd64".
  • [131097010080] |
  • No.
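To tie the keyword discussion above together, a make.conf fragment that accepts all three keyword classes at once might look like the sketch below. The path and the combination are illustrative only (on a Prefix install the file lives under $EPREFIX); as the answers note, amd64 is implied on an amd64 system anyway.

```shell
# Hypothetical make.conf fragment (Prefix: $EPREFIX/etc/make.conf):
# amd64 = stable branch, ~amd64 = testing branch,
# ~amd64-linux = Prefix testing branch.
ACCEPT_KEYWORDS="amd64 ~amd64 ~amd64-linux"
```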
  • [131097020010] |Multicasting in Linux [131097020020] |I want to do multicasting in linux for video/audio/data file transmission. [131097020030] |Is there any utility/tool to do so? [131097030010] |You can use VLC to do this, see http://tldp.org/REF/VideoLAN-Quickstart/x536.html for instance. [131097040010] |How is MIME type "text/html" registered to shared-mime-info? Where is its *.xml file? [131097040020] |On Ubuntu 10.10, which XML file under the directory /usr/share/mime/applications is for MIME type text/html? [131097040030] |I checked xml.xml and xhtml+xml.xml, but neither has a glob pattern of *.html. [131097040040] |So where is this file format entry? [131097040050] |Thanks in advance! [131097040060] |Amanda [131097050010] |This should help. [131097050020] |This search was done on Debian 6, but should be the same on Ubuntu. [131097050030] |From this you can see the information is in /usr/share/mime/globs, /usr/share/mime/globs2, /usr/share/mime/text/html.xml and /usr/share/mime/packages/freedesktop.org.xml [131097060010] |Hide referer with privoxy [131097060020] |How can I set a privoxy rule in the user.action file to block the referer information on any site? [131097070010] |Do you want to stop the REFERER header from your requests to all sites you access through privoxy? [131097070020] |You could try something like this: [131097070030] |But be advised that some sites may break if you have no Referer set. [131097080010] |How can I read line by line from a variable in bash? [131097080020] |I have a variable which contains multiline output of a command. [131097080030] |What's the most efficient way to read the output line by line from the variable? [131097080040] |For example: [131097090010] |References: [131097090020] |
  • Here strings
  • [131097090030] |
  • while
  • [131097090040] |
  • IFS
  • [131097090050] |
  • read
  • [131097100010] |I would go with: [131097100020] |(if the output from jobs could be >16MB, I would stick it in a temp file, and traverse that). [131097110010] |You can use a while loop with process substitution: [131097110020] |To read a multiline variable, a simple way is: [131097110030] |Also, please don't call your variable jobs because that is a shell command and may cause confusion. [131097120010] |To process the output of a command line by line: [131097120020] |If you have the data in a variable already: [131097120030] |printf %s "$foo" is almost identical to echo "$foo", but prints $foo literally, whereas echo "$foo" might interpret $foo as an option to the echo command if it begins with a -, and might expand backslash sequences in $foo in some shells. [131097120040] |Note that in some shells (ash, bash, pdksh, but not ksh or zsh), the right-hand side of a pipeline runs in a separate process, so any variable you set in the loop is lost. [131097120050] |For example, the following line-counting script prints 0 in these shells: [131097120060] |A workaround is to put the remainder of the script (or at least the part that needs the value of $n from the loop) in a command list: [131097120070] |If acting on the non-empty lines is good enough and the input is not huge, you can use word splitting: [131097120080] |Explanation: setting IFS to a single newline makes word splitting occur at newlines only (as opposed to any whitespace character under the default setting). set -f turns off globbing (i.e. wildcard expansion), which would otherwise happen to the result of a command substitution $(jobs) or a variable substitution $foo. [131097120090] |The for loop acts on all the pieces of $(jobs), which are all the non-empty lines in the command output. [131097120100] |Finally, restore the globbing and IFS settings. 
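The while read approach discussed above can be sketched end to end. Here $output is a stand-in for the question's multiline command output (e.g. "$(jobs)"); feeding the loop from a here-document keeps it in the current shell, so $n survives the loop:

```shell
#!/bin/sh
# Read a multiline variable line by line without a pipeline,
# so variables set inside the loop are kept.
output='job one
job two
job three'

n=0
while IFS= read -r line; do
  n=$((n + 1))
done <<EOF
$output
EOF
echo "$n"   # one iteration per line: prints 3
```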
[131097130010] |In recent bash versions, use mapfile or readarray to efficiently read command output into arrays [131097130020] |Disclaimer: horrible example, but you can probably come up with a better command to use than ls yourself [131097140010] |How can I get an ELO Touch Screen to work? [131097140020] |I have bought a new touchscreen POS machine and I have installed fedora 14 on it. [131097140030] |I couldn't make the touch screen work, as the ELO touch manufacturers have drivers only for kernel 2.6.14 versions. [131097140040] |Even though Fedora 14 has precompiled kernel-level driver support for ELO touch screens, I am unable to get it working. [131097140050] |I have tried the xorg.conf configuration as well, but nothing is working. [131097150010] |org mode to dokuwiki converter [131097150020] |Is there an emacs org-mode to dokuwiki converter? [131097150030] |Is there a dokuwiki to emacs org-mode converter? [131097160010] |The generic exporter could be easily configured to export to dokuwiki. [131097160020] |But I can't answer the "back" question. [131097160030] |Very little converts to org at this point. [131097170010] |Device names for logical volumes [131097170020] |Consider this: [131097170030] |and this [131097170040] |(The OS is Centos 5.5 64-bit, the HW is IBM ServeRAID M1015 using an LSI MegaRAID BIOS) [131097170050] |Why does df use a long filesystem name instead of /dev/sda2? [131097180010] |Because /dev/sda2 is not a mounted filesystem, it is a block device; df lists filesystems, not partitions. [131097190010] |df shows you mounted filesystems, which reside on block devices. fdisk is showing you the partition table on your /dev/sda block device. [131097190020] |Since you don't have a filesystem mounted directly on /dev/sda2, you won't see it appear in df output. [131097190030] |Your root filesystem (the first entry in df) is on an LVM logical volume, which, after consulting your fdisk output, is likely in turn on an LVM physical volume on /dev/sda2. 
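The /dev/mapper name that df reports and the LVM-style /dev/<vg>/<lv> name refer to the same volume, and one can be derived from the other with plain parameter expansion. This is a sketch that assumes a simple name with no literal hyphens inside the VG/LV names themselves (LVM escapes those as "--"):

```shell
#!/bin/sh
# Translate a device-mapper path into the /dev/VolGroup/LogVol form.
# Assumes the VG and LV names contain no hyphens of their own.
dm=/dev/mapper/VolGroup00-LogVol00
base=${dm#/dev/mapper/}   # -> VolGroup00-LogVol00
vg=${base%%-*}            # -> VolGroup00
lv=${base#*-}             # -> LogVol00
echo "/dev/$vg/$lv"       # prints /dev/VolGroup00/LogVol00
```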
[131097190040] |When comparing block device names in df output with those in output from the LVM management utilities, it helps to know that the kernel uses the full device name for df (here it's /dev/mapper/VolGroup00-LogVol00). [131097190050] |The device mapper creates convenient symlinks in /dev that correspond with your volume group names. [131097190060] |You can correlate the two outputs by ignoring the "mapper" portion of the name in df, and replacing the hyphen with a forward slash. [131097190070] |Running ls -al /dev/VolGroup00 will illustrate the relationship for you. [131097190080] |This doesn't really have anything to do with hardware raid. [131097190090] |These utilities would give you the same information regardless of controller type. [131097200010] |Why is deleting files by name painfully slow and also exceptionally fast? [131097200020] |Faux pas: The "fast" method I mention below, is not 60 times faster than the slow one. [131097200030] |It is 30 times faster. [131097200040] |I'll blame the mistake on the hour (3AM is not my best time of day for clear thinking :).. [131097200050] |Update: I've added a summary of test times (below). [131097200060] |There seem to be two issues involved with the speed factor: [131097200070] |
  • The choice of command used (Time comparisons shown below)
  • [131097200080] |
  • The nature of large numbers of files in a directory... [131097200090] |It seems that "big is bad". [131097200100] |Things get disproportionately slower as the numbers increase.
  • [131097200110] |All the tests have been done with 1 million files. (real, user, and sys times are in the test scripts) The test scripts can be found at paste.ubuntu.com [131097200120] |I recently created and deleted 10 million empty test files. [131097200130] |Deleting files on a name by name basis (i.e. rm filename), I found out the hard way that there is a huge time difference between 2 different methods... [131097200140] |Both methods use the exact same rm filename command. [131097200150] |Update: as it turns out, the commands were not exactly the same... [131097200160] |One of them was sending 1000 filenames at a time to 'rm'... [131097200170] |It was a shell brace-expansion issue where I thought each filename was being written to the feeder file on a line of its own, but actually it was 1000 per line [131097200180] |The filenames are provided via a 'feeder file' into a while read loop. [131097200190] |The feeder file is the output of ls -1 -f. The methods are identical in all respects, except for one thing: [131097200200] |
  • the slow method uses the unsorted feeder file direct from ls -1 -f
  • [131097200210] |
  • the fast method uses a sorted version of that same unsorted file
  • [131097200220] |I'm not sure whether the sorting is the issue here, or whether it is perhaps that the sorted feeder file just happens to match the sequence in which the files were created (I used a simple ascending integer algorithm) [131097200230] |For 1 million files, the fast rm filename method is 60 times faster than the slow method... again, I don't know if this is a "sorting" issue, or a behind-the-scenes hash table issue... [131097200240] |I suspect it is not a simple sorting issue, because why would ls -1 -f intentionally give me an unsorted listing of a freshly added "sorted" sequence of filenames... [131097200250] |I'm just wondering what is going on here, so it doesn't take me days (yes days) to delete the next 10 million files :) .... [131097200260] |I say "days" because I tried so many alternatives, and the times involved increase disproportionately to the number of files involved .. so I've only tested 1 million in detail [131097200270] |BTW: Deleting the files via the "sorted list" of names is actually faster than rm -rf by a factor of 2. and: rm -r was 30 times slower than the "sorted list" method [131097200280] |... but is "sorted" the issue here? or is it more related to a hashing(or whatever) method of storage used by ext4? [131097200290] |The thing which quite puzzles me is that each call to rm filename is unrelated to the previous one .. (well, at least it is that way from the 'bash' perspective) [131097200300] |I'm using Ubuntu / bash / 'ext4' / SATA II drive. [131097210010] |rm -r is expected to be slow as it's recursive. [131097210020] |A depth first traversal has to be made on the directory structure. [131097210030] |Now, how did you create 10 million files? Did you use a script which loops in some order? [131097210040] |1.txt, 2.txt, 3.txt... If yes, then those files may also be allocated in the same order in contiguous blocks on the HDD, so deleting in the same order will be faster. 
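The "sorted feeder file" method described in the question can be sketched at a small scale: list the directory unsorted with ls -1 -f, sort the names, then rm them one by one from the sorted list (here on 50 files in a throwaway directory rather than a million):

```shell
#!/bin/sh
# Small-scale sketch of the question's sorted-feeder deletion method.
set -e
dir=$(mktemp -d)
cd "$dir"
i=1
while [ "$i" -le 50 ]; do
  touch "$i"
  i=$((i + 1))
done
feeder=$(mktemp)                        # keep the feeder outside the test dir
ls -1 -f | grep -v '^\.' | sort -n > "$feeder"
while IFS= read -r name; do
  rm -- "$name"
done < "$feeder"
rm -f "$feeder"
ls -A | wc -l                           # 0: everything was deleted
```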
[131097210050] |"ls -f" enables -aU, which lists entries in directory (on-disk) order. [131097220010] |You should optimize the filestructure. [131097220020] |So instead of [131097220030] |do something smarter like (bash assumed): [131097220040] |Now this example is rather slow because of the use of md5sum[1]; use something like the following for much faster response, as long as you don't need any particular filenames, duplicates are of no concern and there is no need for a repeatable hash of a certain name :) [131097220050] |Of course this is all sloppily borrowing concepts from hashtables [131097230010] |Is it possible to simulate "no external access" from a Linux machine when developing? [131097230020] |Sometimes I upload an application to a server that doesn't have external internet access. [131097230030] |I would like to create the same environment on my machine for testing some features in the application and avoid bugs (like reading an RSS feed from an external source). [131097230040] |I thought about just unplugging my ethernet cable to simulate, but this seems archaic and I don't know if I'm going to raise the same exceptions (especially in Python) when doing this compared to the limitations at the server. [131097230050] |So, how do I simulate "no external access" on my development machine? [131097230060] |Will "deactivating" my ethernet interface and reactivating later (with a "no hassle" command) have the same behavior as the server with no external access? [131097230070] |I'm using Ubuntu 10.04. [131097230080] |Thanks! [131097240010] |Deleting the default route should do this. [131097240020] |You can show the routing table with /sbin/route, and delete the default with: [131097240030] |That'll leave your system connected to the local net, but with no idea where to send packets destined for beyond. [131097240040] |This probably simulates the "no external access" situation very accurately. 
[131097240050] |You can put it back with route add (remembering what your gateway is supposed to be), or by just restarting networking. [131097240060] |I just tried on a system with NetworkManager, and zapping the default worked fine, and I could restore it simply by clicking on the panel icon and re-choosing the local network. [131097240070] |It's possible that NM might do this by itself on other events, so beware of that. [131097240080] |Another approach would be to use an iptables rule to block outbound traffic. [131097240090] |But I think the routing approach is probably better. [131097250010] |You wrote [131097250020] |So, how do I simulate "no external access" in my development machine? [131097250030] |How do I "deactivate" my ethernet interface and reactivate later with no hassle? [131097250040] |Are these two questions or one question? [131097250050] |I'm not sure what you mean by simulate "no external access". [131097250060] |However, to deactivate the ethernet interface you could simply do [131097250070] |or whatever your internet device is. [131097250080] |This will bring your ethernet interface down and up, respectively. [131097260010] |You could run your code in a virtual machine (User Mode Linux, VServer, OpenVZ, VirtualBox, VMWare, KVM, …) that you provide with only a host-only network interface (i.e. no routing from the VM to anywhere but the host machine). [131097260020] |If you run the application as a dedicated user appuser, you can restrict that user's network access. [131097260030] |Make sure you have iptables (Ubuntu: iptables ) and iproute2 (ip command) (Ubuntu: iproute , iproute-doc ) installed. [131097260040] |Then you can use iptables to mark outgoing traffic from processes running as appuser, and ip rule and ip route to set up an alternate routing table for that user. [131097260050] |(Note: untested. [131097260060] |See also more Linux IP packet mangling examples.) [131097270010] |How to comment multi-line commands in shell scripts? 
[131097270020] |When invoking long, switch-heavy commands, it's a good practice to write them in shell scripts. [131097270030] |Is there an easy way to comment lines in such scripts? [131097270040] |I've tried the following methods, but neither works. [131097280010] |I always moved the commented ones just after the command. [131097290010] |The issue is that the backslashes are removed before the line is parsed, so the first command is parsed as if you'd written command #--bad-switch --good-switch. [131097290020] |If you have a really long sequence of commands, you could for example write a line-by-line comment block above or below it, explaining each in turn, or you could store the parameters in a variable (although that often gives quoting headaches with special characters). [131097300010] |This might be an option: store the command and args in an array, then execute it afterwards [131097310010] |Modify AIX print queue error notification [131097310020] |It seems the default behavior of the AIX print queue is to report queue errors to the user that submitted the print job. [131097310030] |We have several hundred queues used by unattended scripts and cron jobs running under system accounts that are not intended to receive mail. [131097310040] |What I would like to do, and have tried unsuccessfully to do, is to stop these error reports from being directed to the users that submit jobs, and instead direct them to another address. [131097310050] |This is what I have tried, with no effect: [131097310060] |This added the expected configuration to /etc/qconfig. [131097310070] |I restarted lpd, but error messages are still being sent to users. [131097310080] |Furthermore, error messages are not delivered to queue_errors@example.com. [131097310090] |This is on AIX 5.3. 
[131097310100] |A typical error that I'd like to redirect: [131097310110] |I know I can adjust rembak to try to avoid errors due to intermittent remote queue downtime, but how can I configure lpd in such a way as to direct queue errors to an address other than the user's? [131097320010] |Since this has been sitting out there for a couple days, and in that time I've (mostly) figured out the problem, I'll post the answer that works for me. [131097320020] |In brief: [131097320030] |Where QUEUENAME, DEVICENAME and USERNAME are set to the queue, device and user to whom you'd like to have errors sent. [131097320040] |In full: [131097320050] |AIX print queues have virtual printers and print devices associated with each queue. [131097320060] |The command chque, as given in the question, is used to manage attributes of the queue. [131097320070] |Setting "recovery_type" to "sendmail address@example.com" will cause notification to be sent to the address specified when the queue is down, but it won't stop all printer errors from being dispatched to the user that submitted the print job. [131097320080] |By default, virtual printer error messages will be sent to the job submitter via the writesrv daemon. [131097320090] |Writesrv will issue the messages to the user's console if they are logged in. [131097320100] |If they are not logged in at the time of the error, or if the writesrv daemon on the remote host (if applicable) is not listening, an email will be sent to the user at the host from which the job was sent. [131097320110] |In order to completely answer the question, you have to set the si parameter in the virtual printer colon file via the chvirprnt command with the name of a user to receive errors, and also arrange for the local MTA to forward mail for that user to queue_errors@example.com. [131097320120] |It is advisable to create a user for this purpose, or send errors to root and further refine mail delivery for the root user to route the error messages as desired. 
[131097330010] |Restoring backup files [131097330020] |Hi, I have a directory which contains some .py files and some .py.bak files. [131097330030] |I want to delete the .py files and restore the backup files, renaming them to *.py. [131097330040] |Is there a shell script that can do this? [131097330050] |Thank you, rubik [131097340010] |This should do. [131097340020] |Be sure to test on files you wouldn't mind losing. [131097350010] |Selective recursive move? [131097350020] |Is there a command like [131097350030] |which creates dst/1 and dst/2/3? [131097350040] |It should work similarly to mv src/* dst, but move only the subtrees listed. [131097360010] |Assumes bash and GNU find. [131097380010] |Under Linux, using rename from the Linux utilities (rename.ul under Debian and Ubuntu): [131097380020] |With the rename Perl script that Debian and Ubuntu install as prename or rename: [131097380030] |Here's a shell function that does what you're asking except for the argument order: [131097390010] |Setting a Multi-Terminal Linux Server [131097390020] |So I'm a complete newb when it comes to enterprise level linux distros, and linux servers in general. [131097390030] |I know my way around most Linux desktops, but I'm going to be setting up a small Linux server that multiple people would be able to terminal into (probably through SSH or Putty) [131097390040] |How would I go about doing this (storing the users/passwords and such)... And is there a good FREE distro to do this? [131097390050] |I was looking at Ubuntu Server; I was gonna do CentOS but I'm a little bit iffy as their latest release is taking a LONGGG time. [131097390060] |(We use Red Hat Enterprise 5.3 at work....but obviously I can't afford that lol) [131097390070] |Thanks all. [131097390080] |edit: Also how do you make like "names" for the server, so instead of 164.25.252.35 (or w/e ip, i just made that one up) [131097390090] |it could be something like tron.dev.sauron.com or something.... 
(ya im a NUB) [131097400010] |First of all, your users should not be using passwords to log in to SSH, and should be using keys+passphrases, unless you absolutely must use passwords for some reason. [131097400020] |For general information on how to set up SSH, I would look into specific information for setting up SSH on whatever distribution you end up choosing (most of them will have a tutorial on their site), or just google for "How to set up SSH". [131097400030] |Ubuntu Server is an excellent server distribution (which powers extremely high-traffic servers, such as those that Wikipedia runs on) and has packages for everything you'd need to do this (openssh-server, etc.) They also have very regular releases, so if you're worried about slow release cycles, this will not be a problem. [131097400040] |As far as how names like tron.dev.sauron.com get converted to IP addresses, this is known as domain name resolution. [131097400050] |If you are trying to set up a remote server for people to log in to, you're going to need to register a domain name and either (a) run a DNS server yourself, or (b) use a DNS service that will route it to the proper IP. [131097400060] |(See this for more info: http://www.boutell.com/newfaq/creating/domainathome.html). [131097400070] |The latter is likely a much better option. [131097410010] |Change file creation time on a FAT filesystem [131097410020] |I need a way to change the creation time of a file on a mounted FAT32 volume. [131097410030] |I have to do that because my MP3 player will only read files sorted by this creation time. [131097410040] |If I can find a way to set the file creation time (like touch can do with modification / access time) of a file, a trivial script will allow MP3 files to be read in the right order (as expected, alphabetically). [131097410050] |But I've yet to find a solution, and my searches have been in vain. [131097410060] |I hope you guys can help me ! 
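One approach, sketched below with assumptions noted in the comments, is to recreate each file in the desired (alphabetical) order: copying makes a brand-new file, and hence a new creation timestamp on FAT. The demo runs in a throwaway directory; on the device you would cd to the FAT mount point instead, and likely add a sleep 1 per file so FAT's coarse timestamps stay distinct:

```shell
#!/bin/sh
# Sketch, not a tested recipe for the player: recreate files in
# filename order so FAT creation times follow the alphabet.
set -e
dir=$(mktemp -d)
cd "$dir"
printf 'one' > a.mp3
printf 'two' > b.mp3
for f in *.mp3; do           # globs expand in sorted order
  mv -- "$f" "$f.tmp"
  cp -- "$f.tmp" "$f"        # cp creates a new file -> new creation time
  rm -- "$f.tmp"
done
cat a.mp3 b.mp3              # contents survive the round trip: onetwo
```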
[131097420010] |First thing that comes to mind is to mv the file(s) to a temporary, cp the temporary file to the old filename and delete the temporary. [131097420020] |I just made a quick check: [131097420030] |returns: [131097420040] |wait some minutes and then: [131097420050] |notice the creation time isn't modified, while: [131097420060] |now foo is created at current time! [131097420070] |EDIT [131097420080] |sorry, forgot to mention, tried this on a FAT32 formatted USB stick under Ubuntu. [131097430010] |All commands that should be used to connect to wifi in command line [131097430020] |I would like to switch from gnome to awesome and I would like to connect to my wifi network from the command line (instead of using gnome tools). [131097430030] |So, I searched on the internet and found approximately the same methods: [131097430040] |When I run awesome alone and execute those commands, I'm stuck at the 3rd one. [131097430050] |It looks like the DHCP request never completes (the command continues running endlessly). [131097430060] |But when I run awesome in Xephyr and execute the same commands, dhclient wlan0 works well. [131097430070] |So I suppose Gnome does something that I don't, but I can't find what. [131097430080] |Could someone help me? [131097440010] |Looks like maybe you're not setting the channel and "mode". [131097440020] |I use a simple script that does these shell commands: [131097440030] |You have to put in your key, and access point MAC address. [131097440040] |This is under Slackware 13.1, and I'm using a WRT54GL running DD-WRT. [131097440050] |Much to my shame, I'm using WEP encryption. [131097440060] |I had to do some experimenting about when to "ifconfig" and when to "iwconfig" and in what order to set ESSID and AP. [131097450010] |How to find out from the logs what caused system shutdown? [131097450020] |E.g. [131097450030] |I'm seeing this in /var/log/messages: [131097450040] |Is there a way to find out what caused the shutdown? [131097450050] |E.g. 
was it run from the console, or did someone hit the power button, etc.? [131097460010] |I have just a clumsy idea, but maybe it works for you: enter the command last and check out the login information for all of the users. [131097460020] |Then filter the users with the permission required for halt who were logged in at that moment, and check out their .bash_history file to see if they entered halt or not. [131097470010] |Some possible log files to explore: (found on an Ubuntu system, but I would hope that they're present on most Linux/Unix systems) [131097470020] |Again, these log files are present on an Ubuntu system, so filenames may be different. [131097470030] |The tail command is your friend. [131097480010] |Hi. [131097480020] |Try the following commands: [131097480030] |Display list of last reboot entries: last reboot | less [131097480040] |Display list of last shutdown entries: last -x | less [131097480050] |or more precisely: last -x | grep shutdown | less [131097480060] |You won't know who did it, however. [131097480070] |If you want to know who did it, you will need to add a bit of code, which means you'll know next time. [131097480080] |I've found this resource online. [131097480090] |It might be useful to you: [131097480100] |How to find out who or what halted my system [131097480110] |Nico
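The last -x | grep shutdown filter suggested above can be demonstrated on a canned sample; the lines below are hypothetical stand-ins for real last -x output, which is read from /var/log/wtmp on your own system:

```shell
#!/bin/sh
# Count shutdown records in (fake) `last -x` output.
sample='shutdown system down  2.6.32-5  Mon Mar  7 22:10 - 08:00  (09:50)
reboot   system boot  2.6.32-5  Mon Mar  7 08:00 - 22:10  (14:10)
runlevel (to lvl 3)   2.6.32-5  Mon Mar  7 08:01'
printf '%s\n' "$sample" | grep -c '^shutdown'   # one shutdown entry: prints 1
```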