[131047250010] |determining path to sourced shell script [131047250020] |Is there a way for a sourced shell script to find out the path to itself? [131047250030] |I'm mainly concerned with bash, though I have some coworkers who use tcsh. [131047250040] |I'm guessing I may not have a ton of luck here, since sourcing causes commands to be executed in the current shell, so $0 is still the current shell's invocation, not the sourced script. [131047250050] |My best thought currently is to do source $script $script, so that the first positional parameter contains the necessary information. [131047250060] |Anyone have a better way? [131047250070] |To be clear, I am sourcing the script, not running it: [131047260010] |I think that you could use the $BASH_SOURCE variable. [131047260020] |It returns the path that was sourced: [131047260030] |So as a next step we should check whether the path is relative or not. [131047260040] |If it's not relative, everything is OK. [131047260050] |If it is, we can check the current directory with pwd and concatenate it with / and $BASH_SOURCE. [131047270010] |Actually, "dirname $0" will get you the path to the script, but you have to interpret it a bit: [131047270020] |You have to be prepared to handle "." as the directory name under some common circumstances. [131047270030] |I'd experiment a bit, as I remember the dirname built-in to ksh doing things a bit differently when "." appears in PATH. [131047280010] |In tcsh, $_ at the beginning of the script will contain the location if the file was sourced, and $0 contains it if it was run. [131047280020] |In Bash: [131047290010] |For thoroughness and the sake of searchers, here is what these do... [131047290020] |It is a community wiki, so feel free to add other shells' equivalents (obviously, $BASH_SOURCE will be different). [131047290030] |test.sh: [131047290040] |test2.sh: [131047290050] |

Bash:

[131047290060] |
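The bodies of test.sh and test2.sh were not preserved in this dump; the following pair is a reconstruction sketch (an assumption, not the original files) that exercises the same comparison. test.sh is run directly and sources test2.sh:

```shell
# Hypothetical reconstruction; the /tmp paths are arbitrary.
cat > /tmp/test2.sh <<'EOF'
echo "test2.sh: \$0=$0 BASH_SOURCE=${BASH_SOURCE[0]}"
EOF
cat > /tmp/test.sh <<'EOF'
#!/bin/bash
echo "test.sh:  \$0=$0 BASH_SOURCE=${BASH_SOURCE[0]}"
. /tmp/test2.sh
EOF
bash /tmp/test.sh
```

Under bash, $0 stays /tmp/test.sh on both lines, while $BASH_SOURCE switches to /tmp/test2.sh inside the sourced file; under dash, $BASH_SOURCE simply expands to nothing.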

Dash:

[131047290070] |Replacing $BASH_SOURCE with $DASH_SOURCE resulted in the same thing. [131047300010] |Debugging ethernet before NFS boot [131047300020] |I'm trying to boot Linux from U-boot on an embedded ARM board using a filesystem on a remote machine served via NFS. [131047300030] |It appears that the ethernet connection is not coming up correctly, which results in a failure to mount the NFS share. [131047300040] |However, I know that the ethernet hardware works, because U-boot loads the kernel via TFTP. [131047300050] |How can I debug this? [131047300060] |I can try tweaking the kernel, but that means recompiling the kernel for every iteration, which is slow. [131047300070] |Is there a way that I can make the kernel run without being able to mount an external filesystem? [131047310010] |You can compile an initrd image into the kernel (General Setup -> Initial RAM filesystem and RAM disk (initramfs/initrd) support -> Initramfs source file(s)). [131047310020] |You specify a file in a special format, like this (my init for x86): [131047310030] |I haven't used it on ARM but it should work. /init is the file you can put startup commands in. [131047310040] |The rest are the various files needed (like busybox etc.). [131047320010] |A few things that come to mind: [131047320020] |
  • Use tcpdump, wireshark or other Ethernet packet inspector to see whether the board is sending packets to the wrong address or not sending anything at all.
  • [131047320030] |What do you have on the serial console (if there is one)?
  • [131047320040] |Try connecting a remote kernel debugger.
  • [131047320050] |Try running inside a simulator, if you have one in which you can reproduce your problem.
  • [131047320060] |Instead of just fetching a kernel, put a boot-and-root filesystem in flash memory, or load a root filesystem to a RAM disk.
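The tcpdump suggestion above can be sketched concretely; run the capture from the NFS server while power-cycling the board. The interface name eth0 is an assumption, and the command is assembled in a variable so the filter is easy to adjust:

```shell
#!/bin/sh
# Build the capture command; adjust IFACE to the interface facing the board.
IFACE=eth0
# ARP, DHCP/BOOTP (ports 67/68) and NFS (port 2049) cover the boot sequence.
FILTER='arp or udp port 67 or udp port 68 or port 2049'
# -n skips DNS lookups; -e prints MAC addresses, exposing ARP-level problems.
CMD="tcpdump -n -e -i $IFACE $FILTER"
echo "run as root on the NFS server: $CMD"
```

Seeing DHCP requests but no replies, or ARP requests that go unanswered, narrows the failure down quickly.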
[131047330010] |This post is regarding the network issues brought up in the question, not about kernel debugging. [131047330020] |If your switch supports Spanning Tree Protocol (STP), keep in mind that STP may not activate the Ethernet port on the switch for 6 seconds or more while STP does its work. [131047330030] |This delay may start over every time the host resets the Ethernet port, which can happen multiple times between power-up, the DHCP request, when the kernel loads the network drivers, etc. [131047330040] |This can interfere with NFS boots for diskless systems, DHCP, kickstart, etc., and has caused plenty of headaches for many sysadmins. [131047330050] |For some examples, see RedHat Bug 189795 - DHCP timeouts during Kickstart, and this PXE Guide. [131047330060] |Most high-end switches such as Cisco switches and HP ProCurve switches do support STP, and it's enabled for all ports out of the box. [131047340010] |Does RHEL execute all cronjob files under /etc/cron.d/*? [131047340020] |This is a RedHat Enterprise Linux 5 system (RHEL). [131047340030] |We manage this system using Cfengine. [131047340040] |We have several cronjobs which are running twice as often as usual. [131047340050] |I checked the cronjobs under /etc/cron.d/, and this directory contains the actual script called host-backup, and also contains a cfengine backup file called host-backup.cfsaved, as so: [131047340060] |Does this operating system execute all files at /etc/cron.d/*, or does it only execute files which match a certain pattern? [131047340070] |Can I configure this, and where is this defined? [131047340080] |I cannot find this answer in the RHEL documentation. [131047350010] |I think the reason you've had difficulty tracking down the answer is that it's not a Red Hat-specific question. [131047350020] |The problem you're seeing is part of the standard functionality of cron - each file in the directory you identify is automatically treated as a separate job.
[131047350030] |So, the short answer to your question is "yes, all files are executed". [131047350040] |I don't think this is something that can be configured. [131047360010] |(If you're paying for Red Hat support, you should ask them this kind of question. [131047360020] |This is exactly what you're paying for!) [131047360030] |From the RHEL5 crontab(5) man page: [131047360040] |If it exists, the /etc/cron.d/ directory is parsed like the cron spool directory, except that the files in it are not user-specific and are therefore read with /etc/crontab syntax (the user is specified explicitly in the 6th column). [131047360050] |(Is there a simpler way of reading RHEL man pages without having access to a RHEL system? [131047360060] |At least this way I could see that this paragraph is part of the Red Hat patch, so it's not a standard Vixie Cron 4.1 feature.) [131047360070] |Looking at the source, I see that the following files are skipped: .*, #*, *~, *.rpmnew, *.rpmorig, *.rpmsave. [131047360080] |So yes, your *.cfsaved files are read in addition to the originals. [131047370010] |Here is the answer from RedHat support: [131047370020] |Please be informed that all files under the cron.d directory are examined and executed; it's basically an extension of the /etc/crontab file (i.e. the same effect as if you add the entries to the /etc/crontab file). [131047370030] |So, to answer my question "Does this operating system execute all files at /etc/cron.d/*, or does it only execute files which match a certain pattern? [131047370040] |Can I configure this, and where is this defined?" [131047370050] |All files under /etc/cron.d/* are executed (although it seems that certain file patterns such as *.rpmsave, *~, etc. are ignored, according to documentation in the source files). [131047370060] |It is not possible to configure this via a configuration file. [131047370070] |Configuring this is probably possible if the source is recompiled.
[131047370080] |This behavior is mentioned in the documentation contained with the source, but doesn't appear in any manual or man page that I can find. [131047380010] |Open an RPM on a Mac? [131047380020] |I am on a MacBook Pro running Apple Leopard (Mac OS X 10.5.8). [131047380030] |I would like to unpackage an RPM and view the files contained within wget-1.11.4-2.el5_4.1.src.rpm. [131047380040] |I don't need to install the files to a particular location or run any %postinstall scripts or anything. [131047380050] |I just want to unpackage this RPM so that I can view the source files underneath. [131047380060] |Is it possible to unpackage an RPM file on a non-RedHat/CentOS system? [131047390010] |Yes, you can. [131047390020] |For one, Midnight Commander opens RPM files. [131047390030] |I don't use one myself, but a Google search lists some links that might point at a GUI. [131047400010] |You can install rpm through Darwin Ports or Fink or Mac Ports, or even a Darwin port, rpm4darwin. [131047400020] |To extract files from an rpm package without installing it, use the companion utility rpm2cpio, e.g. [131047400030] |There's also a portable rpm2cpio script if you don't want or can't get the version that's bundled with the rpm utility (the script may not work with older or newer versions of the rpm format though). [131047410010] |I would think that (like Windows and Linux) any archiver program should be able to decompress it. iArchiver, the unArchiver, and Archiver all list "read-only RPM" in their supported formats. [131047420010] |Can I limit a process to a certain amount of time / CPU cycles? [131047420020] |We have a script which runs on our web servers, triggered by customer action, which initiates a unix process to generate some cache files. [131047420030] |Because this process acts upon files supplied by our customer, it sometimes misbehaves, running so long that the PHP process which spawns it times out, or using so much CPU time that a sysadmin will kill it.
[131047420040] |Is there any command which I could run which would limit the CPU time / runtime of the process? [131047420050] |I am looking for a command like /usr/bin/time, where I could run that command and pass it the commandline I want it to run and limit. [131047430010] |From within a program, call setrlimit(RLIMIT_CPU, ...). [131047430020] |From the shell, call ulimit -t 42 (this is not standard but supported by most shells (including bash and ksh) on most unix variants). [131047430030] |This causes the current process to be killed once it has used up the specified number of seconds of CPU time. [131047430040] |The limitation is inherited by child processes. [131047430050] |A common shell idiom is (ulimit -t 42; runaway_process) if you want to be able to run other unlimited processes afterwards from the same shell. [131047430060] |See also Is there a way to limit the amount of memory a particular process can use in Unix? . [131047430070] |The principle is the same, you're just limiting a different resource. [131047440010] |In addition to Gilles' answer, there is the cpulimit tool that does exactly what you want - including modifying the limit at runtime. [131047440020] |Additionally, it can limit the process to only certain CPUs/cores, IIRC. [131047450010] |How do you get 'instant' terminal feedback for a C program? [131047450020] |The problem I have is that ncurses (or some routine lower than ncurses) waits around for a while before passing the 'space' character to my program. [131047450030] |(I think this is called a 'typematic' delay.) [131047450040] |I have written a very simple program that uses the ncurses library to demonstrate what my problem is, and what I hope to achieve in the end. [131047450050] |Compile it, and run it, and you will see that the hash in the leftmost column moves up or down depending on whether the spacebar is pressed or not. [131047450060] |However, there is a lag of about a second, which is what I am attempting to get rid of, because I want instant input.
[131047450070] |There is a way of improving the problem by changing the 'repeat rate' (or something like that) of the terminal, but this never seems to completely alleviate the problem. [131047450080] |I would like instant input, and I don't know how to achieve this. [131047450090] |Does anybody else know? [131047460010] |Is PAE worth it when I have 4GB RAM? [131047460020] |I have 4GB RAM installed on my machine, and I'm considering using all of it (i.e., installing a PAE-enabled kernel). [131047460030] |I heard there's a performance penalty for this, so I wanted to know about others' experiences. [131047460040] |Should I proceed, or should I remain content with 3GB? [131047460050] |[note] I will be running Linux 2.6.32. [131047470010] |If you have a 64-bit processor, an alternative would be to try a 64-bit kernel. [131047470020] |According to this RedHat white paper, a typical server experiences around a 1% performance hit, and other tasks suffered a performance hit of 0% - 10%. [131047470030] |In addition to having more available memory, enabling PAE means you have an NX bit, which can increase security. [131047480010] |interactive shell vs non-interactive shell [131047480020] |What are interactive and non-interactive shells? [131047480030] |Question: Create a user john who should not get an interactive shell. [131047480040] |How can we do this? [131047490010] |Yes: change the shell in the password file (/etc/passwd) to some program that will not allow a shell escape. [131047490020] |If you want to be a BOFH, /bin/false will do exactly what you want. [131047500010] |The /etc/passwd file has, as the last item on a user's line, the program to be run upon login. [131047500020] |For normal users this is typically set to /bin/sh or another shell (e.g. bash, zsh).
[131047500030] |Traditionally, identities that are used to own processes or files or other resources have their "shell" set to /bin/false, as in [131047500040] |The pseudo-user syslog owns /var/log/syslog and is the only UID which has write permission for that file, but one cannot log in as user syslog, as there is no command interpreter to be run. [131047510010] |Regarding the question you are trying to answer: [131047510020] |Create a user john who should not get an interactive shell. [131047510030] |The question means "Create a user named john who won't be able to log in and run commands from a shell". [131047510040] |Interactive describes the way the shell works: the user types something, and the shell does something accordingly (there is a kind of communication between the user and the shell). [131047510050] |The word "interactive" doesn't really add any information to the question, because as long as there is a user typing commands, the shell session is interactive. [131047510060] |There is also the non-interactive mode, where the user saves a series of commands they want to run in a file (called a shell script), and executes the file afterwards. [131047510070] |Some commands behave differently depending on whether they are run interactively or non-interactively. [131047510080] |You can read more here (this document is for bash, but the same concept applies for other shells). [131047510090] |On how to create such a user: if you skim through man useradd, there is an option to set the login shell, -s or --shell. [131047510100] |You can also create a user normally, in any way you want (which I assume you know already?), then edit /etc/passwd as in msw's answer. [131047520010] |Bind the MAC with the IP address? [131047520020] |I have configured a DHCP server on Linux. [131047520030] |My goal is to map client IP addresses to their MAC addresses in such a way that no "unregistered" client machine gets a response from the DHCP server.
[131047520040] |That is, I would like to give them Internet access one by one through IP and MAC binding. [131047530010] |The dhcpd.conf file has the ability to specify group parameters. [131047530020] |In a group you can specify specific hardware addresses. [131047530030] |From the man page: [131047540010] |As noted by Warren and Shawn, your question seems to imply that preventing address assignment to unregistered machines is intended to keep them off the net. [131047540020] |You cannot increase security this way, as a machine can either:
  • find a "trusted" MAC address and pretend to have that MAC to get an IP address from the DHCP server, or
  • [131047540040] |Just pick its own IP address and skip DHCP completely
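For reference, the registered-clients setup that the dhcpd.conf answer above describes is usually written with host declarations plus deny unknown-clients; every name and address below is made up for illustration:

```
# dhcpd.conf sketch (hypothetical values)
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    deny unknown-clients;
}

host alice-laptop {
    hardware ethernet 00:16:3e:aa:bb:cc;
    fixed-address 192.168.1.10;
}
```

Even with this in place, the caveats above stand: a machine configured with a static address never consults dhcpd at all.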
[131047540050] |More generally, you cannot enforce exclusion at the IP layer as it has no mechanism designed to do this. [131047540060] |If you really need to enforce exclusion at the Internet layer, you need to use something like IPsec, which is far more complicated than DHCP. [131047540070] |Far more common is to secure access at the Session layer or above; for example, SSH. [131047550010] |how do you sort du output by size? [131047550020] |How do you sort du -sh /dir/* by size? [131047550030] |I read one site that said use | sort -n but that's obviously not right. [131047550040] |Here's an example that is wrong. [131047560010] |You can use du -sh * | sort -h. [131047560020] |This tells sort that the input is in the human-readable format. [131047560030] |This feature was added recently, in GNU Core Utilities 7.5 in Aug 2009, so many distributions do not yet have it. [131047570010] |This little Perl script does the trick. [131047570020] |Save it as duh (or whatever you want) and call it with duh /dir/* [131047580010] |Try using the -k flag to count 1K blocks instead of using human-readable output. [131047580020] |Then, you have a common unit and can easily do a numeric sort. [131047580030] |You don't explicitly require human units, but if you did, then there are a bunch of ways to do it. [131047580040] |Many seem to use the 1K block technique above, and then make a second call to du. [131047580050] |http://serverfault.com/questions/62411/how-can-i-sort-du-h-output-by-size [131047590010] |If you don't have sort -h you can do this: [131047590020] |This gets the du list, separates the suffix, and sorts using that. [131047590030] |Since there is no suffix for <1K, the first sed adds a B (for byte). [131047590040] |The second sed adds a delimiter between the digit and the suffix. [131047590050] |The third sed converts G to Z so that it's bigger than M; if you have terabyte files, you'll have to convert G to Y and T to Z.
Finally, we sort by the two columns, then we replace the G suffix. [131047600010] |If you don't have a recent version of GNU coreutils, you can call du without -h to get sortable output, and produce human-friendly output with a little postprocessing. [131047600020] |This has the advantage of working even if your version of du doesn't have the -h flag. [131047600030] |If you want SI suffixes (i.e. multiples of 1000 rather than 1024), change 1024 to 1000 in the while loop body. [131047600040] |(Note that that 1000 in the condition is intended, so that you get e.g. 1M rather than 1000k.) [131047600050] |If your du has an option to display sizes in bytes (e.g. -b or -B 1 — note that this may have the side effect of counting actual file sizes rather than disk usage), add a space to the beginning of s (i.e. s=" kMGTEPYZ";), or add if (x<1000) {return x} else {x/=1024} at the beginning of the human function. [131047600060] |Displaying a decimal digit for numbers in the range 1–10 is left as an exercise to the reader. [131047610010] |Here's what I use on Ubuntu 10.04, CentOS 5.5, FreeBSD and Mac OS X. [131047610020] |I borrowed the idea from www.geekology.co.za/ and earthinfo.org, as well as the infamous ducks from "Linux Server Hacks" by O'Reilly. [131047610030] |I am still adapting it to my needs. [131047610040] |This is still a work in progress (As in, I was working on this on the train this morning.): [131047610050] |Here's the output: [131047620010] |Grub error 21 unless both sd card and live-usb are present [131047620020] |I'm not sure exactly how my EEE PC got to this point, but I have Backtrack (an Ubuntu-based distro) on the SD card, and a few different Ubuntu LiveUSB distros on different USB memory sticks. 
[131047620030] |If both the SD card and any of the USB sticks are present, the system will present a GRUB boot menu which seems to be the built-in EEE PC one--it has the "reset to factory defaults" option--but all options will boot Backtrack, even after claiming to reset to factory defaults. [131047620040] |If either the USB stick, the SD card, or both are missing, I get a GRUB error 21 at boot. [131047620050] |Not sure quite how to un-hose this; I'd just like to put a conventional netbook Ubuntu on the built-in SSD, but I can't trivially do that from Backtrack 4. [131047630010] |This is a broken GRUB setup. [131047630020] |Reinstall it on the SSD, using UUIDs for booting from the SD card, and you'll be OK. [131047630030] |I recommend installing GRUB 2 though, because it handles these situations better. [131047640010] |How to connect two machines via wifi and no other hardware? [131047640020] |Without using any other hardware, I want to connect two machines that have wifi, so that I can transfer files between them, for example. [131047650010] |Take a look at setting up an ad-hoc connection. [131047660010] |How to generate an initramfs image with busybox links? [131047660020] |Having been directed to initramfs by an answer to my earlier question (thanks!), I've been working on getting initramfs working. [131047660030] |I can now boot the kernel and drop to a shell prompt, where I can execute busybox commands, which is awesome. [131047660040] |Here's where I'm stuck-- there are (at least) two methods of generating initramfs images:
  • By passing the kernel a path to a prebuilt directory hierarchy to be compressed
  • [131047660060] |By passing the kernel the name of a file that lists the files to be included.
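For the second method, the file passed to the kernel build is a gen_init_cpio description, one entry per object; a minimal sketch might look like this (the initramfs/ source paths are placeholders for wherever your built files live):

```
# type  name  [source]  mode  uid  gid  [dev_type major minor]
dir   /dev                        0755 0 0
nod   /dev/console                0600 0 0 c 5 1
nod   /dev/null                   0666 0 0 c 1 3
dir   /bin                        0755 0 0
file  /bin/busybox  initramfs/busybox  0755 0 0
slink /bin/sh       busybox       0777 0 0
file  /init         initramfs/init     0755 0 0
```

The nod lines create the device nodes directly in the archive, so no root privileges are needed.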
[131047660070] |The second method seemed a little cleaner, so I've been using that. [131047660080] |Just for reference, here's my file list so far: [131047660090] |Unfortunately, I have learned that busybox requires a long list of links to serve as aliases to all of its different commands. [131047660100] |Is there a way to generate the list of all these commands so I can add it to my file list? [131047660110] |Alternatively, I could switch to method 1, using the prebuilt directory hierarchy, but I'm not sure how to make the /dev nodes in that case. [131047660120] |Both of these paths seem messy. [131047660130] |Is there an elegant solution to this? [131047670010] |It's not the kernel that's generating the initramfs, it's cpio. [131047670020] |So what you're really looking for is a way to build a cpio archive that contains devices, symbolic links, etc. [131047670030] |Your method 2 uses usr/gen_init_cpio in the kernel source tree to build the cpio archive during the kernel build. [131047670040] |That's indeed a good way of building a cpio archive without having to populate the local filesystem first (which would require being root to create all the devices, or using fakeroot or a FUSE filesystem which I'm not sure has been written already). [131047670050] |All you're missing is generating the input file to gen_init_cpio as a build step. [131047670060] |E.g. in shell: [131047670070] |If you want to reflect the symbolic links to busybox that are present in your build tree, here's a way (I assume you're building on Linux): [131047670080] |Here's a way to copy all your symbolic links: [131047670090] |For busybox, maybe your build tree doesn't have the symlinks, and instead you want to create one for every utility that you've compiled in. [131047670100] |The simplest way I can think of is to look through your busybox build tree for .*.o.cmd files: there's one per generated command.
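That generation step can be sketched with a small loop emitting one slink line per applet; the three applet names here are stand-ins (in a real build you would substitute the output of busybox --list, where available, or the .*.o.cmd scan just described):

```shell
#!/bin/sh
# Emit one gen_init_cpio "slink" line per busybox applet name.
applets="sh ls mount"
for a in $applets; do
    printf 'slink /bin/%s busybox 0777 0 0\n' "$a"
done
```

Appending this output to the gen_init_cpio file list keeps the links in step with whatever applets the busybox build actually produced.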
[131047680010] |If you are in the busybox shell (ash) you don't need to worry about aliases, as they will be run as commands by default, IIRC. [131047680020] |Anyway, busybox --help gives a list of supported commands. [131047680030] |In my case they are: [131047680040] |In the case of the first method, you create the device nodes with the mknod(1) command. [131047680050] |For example: [131047690010] |The first few lines of the initscript in my initramfs are simply: [131047690020] |This creates the symlinks for you. [131047690030] |It only takes an unmeasurably small amount of time on my 500 MHz board, possibly longer on very low-end hardware, but likely manageable. [131047690040] |It saves a bunch of issues with remembering to create all the right links when you update BB... [131047700010] |rsync time comparison - what's the precision of the Modified times comparison [131047700020] |I'm doing some synching with rsync using: [131047700030] |For my initial sync. [131047700040] |My idea now is to use: [131047700050] |To sync files that have a new modified time. [131047700060] |My problem is that after the initial rsync, I see the following Modified times on the files that have been synced: [131047700070] |The modify time is 'effectively' the same, but only down to the second. [131047700080] |What's the resolution of the comparison here? [131047700090] |It seems that anything that is the same up to the second is considered the same, but I can't find any docs specifying this. [131047700100] |Anyone know off the top of their heads? [131047710010] |Here's me answering my own question: [131047710020] |rsync uses the utime() call, which sets the modification time of a file down to 1-second resolution. [131047710030] |So, effectively, files that are the same up to the second are considered the same for the time-comparison piece of rsync's checks. [131047720010] |How to find out which package a file belongs to?
[131047720020] |In the Debian family of OSes, dpkg --search /bin/ls gives: [131047720030] |That is, the file /bin/ls belongs to the Debian package named coreutils. (see this post if you are interested in a package containing a file that isn't installed) [131047720040] |What is the Fedora equivalent? [131047730010] |You can use rpm -qf /bin/ls to figure out what package your installed version belongs to: [131047730020] |Update: Per your comment, the following should work if you want only the name of the package (I just got a chance to test): [131047730030] |You can also use yum provides /bin/ls to get a list of all available repository packages that will provide the file: [131047740010] |How to strip a Linux system? [131047740020] |I've been building a Linux distro, and I've stripped the binaries, etc. [131047740030] |The system won't use GCC or development tools, as it will be a Chrome kiosk, so it would greatly help if I could strip down the system... [131047740040] |I was wondering, is there a way that I can delete all of the unused system files (like binaries, etc.) by watching what files/libraries are used during runtime? [131047740050] |Maybe another method is preferred, but is there a way to accomplish something like this? [131047750010] |There are programs like Bootchart that can be used to show what programs you ran during startup - you can probably keep it going after boot to see what's been invoked during a session. [131047750020] |A better solution may be to use remastering tools. [131047750030] |There are remastering tools for Fedora, Ubuntu, and others; you can use these to customize a distribution. [131047750040] |You might want to look at Tiny Core Linux. [131047750050] |There is a guy working on a remaster script for that as well. [131047760010] |Amongst other things... if you want to remove everything you don't need. 
[131047760020] |Make sure the filesystem has atime fully enabled; you can set this in /etc/fstab. The current default is relatime, but here you want full atime updates (the strictatime option). [131047760030] |Every time a file is accessed, its timestamp will get updated. [131047760040] |Then do some usage for a few days, to see which files have never had their atime updated. [131047760050] |I would do all of this in a VM, and very carefully, because I imagine there are a few files that are read when the system is in read-only mode. Note: set it to noatime once you're ready for production; otherwise you'll do a write every time you read, which is inefficient. [131047760060] |Though to be honest, I'd look at Damn Small Linux - do you really need to be smaller than that? Build based on their distro and simply remove the window manager and a few extra programs, leaving all the command-line tools; that way, if you ever need to repair or reload, you have the shell. [131047770010] |Actively use your system for a while with file access times enabled. [131047770020] |See what files never have their access time modified. [131047770030] |These are candidates for deletion (but make sure there isn't a reason to keep them, e.g. because they're hardware drivers for hardware you don't have, or they're needed early in the boot process when the root partition is still mounted read-only). [131047770040] |Since you'll have few big applications, check what libraries are used by a single executable. [131047770050] |Consider linking them statically. [131047780010] |Where exactly are you starting from? [131047780020] |Are you stripping an existing distro? [131047780030] |Is there a reason you have to start with any distro? [131047780040] |You might want to consider building an embedded system from scratch and loading only what you know you need. [131047790010] |Assuming you are using Debian or its derivatives: [131047790020] |After some days of (heavy) usage, run popularity-contest.
[131047790030] |It will display the oldest unused packages at the bottom. [131047790040] |Uninstall those, but with a watchful eye on whether or not there's stuff depending on them installed. [131047790050] |Here's a snippet of the output: [131047790060] |The columns mean atime, ctime, package-name, and file accessed. [131047800010] |What is the difference between the various tiling modes in kwin, and how do I use them? [131047800020] |There are 3 tiling modes in KDE: spiral, columns, and floating. [131047800030] |What does each do? How do I make them work for me? For example, spiral seems to cut my screen in half one way, then the next half another way. [131047800040] |Is it possible to adjust it so that it's like 2/3? [131047800050] |I don't understand how to make use of float. [131047800060] |Perhaps someone could explain what each is for (or one for each answer) and how they can be used and tuned. [131047810010] |Problem connecting to linksys wireless using zenwalk [131047810020] |Hello all, [131047810030] |I just recently installed Zenwalk Linux on my machine, and I am having trouble connecting to my wireless router. [131047810040] |I am using the appropriate options, such as "use encryption", WEP passphrase, the correct passphrase, and so on. [131047810050] |It sits there for a minute and gives me an error message of "connection failed: bad password". [131047810060] |I have tried everything possible in my power as far as troubleshooting goes. [131047810070] |I tried running the wireless configuration through the terminal, but I don't believe I am skilled enough to do such a task, as I have not been successful thus far. [131047810080] |If anyone is able to help, it is much appreciated. [131047810090] |Thanks [131047820010] |Have you tried hooking your PC up to the router with an ethernet cable? At least this way you can diagnose whether the problem is with the wireless drivers/hardware or if it is at a higher level.
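One terminal-based route worth trying for the wireless problem is wpa_supplicant, which handles WEP as well as WPA; a sketch of /etc/wpa_supplicant.conf with placeholder values (the SSID and the 13-character ASCII key are made up):

```
network={
    ssid="linksys"
    # WEP uses no WPA key management
    key_mgmt=NONE
    wep_key0="MyWepKey12345"
    wep_tx_keyidx=0
}
```

Start it with something like wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf (wlan0 being an assumed interface name), then run a DHCP client. Also note a common cause of "bad password" with WEP: supplying the router's passphrase where the tool expects the actual 10- or 26-digit hex key; copying the hex key from the router's admin page rules that out.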
[131047830010] |What is the difference between ioctl(), unlocked_ioctl() and compat_ioctl()? [131047830020] |Going through the linux 2.6.36 source code at lxr.linux.no, I could not find the ioctl() method in file_operations. [131047830030] |Instead I found two new calls: unlocked_ioctl() and compat_ioctl(). [131047830040] |What is the difference between ioctl(), unlocked_ioctl(), and compat_ioctl()? [131047840010] |Meta-answer: All the raw stuff happening to the Linux kernel goes through lkml (the Linux kernel mailing list). [131047840020] |For explicative summaries, read or search lwn (Linux weekly news). [131047840030] |Answer: From The new way of ioctl() by Jonathan Corbet: [131047840040] |ioctl() is one of the remaining parts of the kernel which runs under the Big Kernel Lock (BKL). [131047840050] |In the past, the usage of the BKL has made it possible for long-running ioctl() methods to create long latencies for unrelated processes. [131047840060] |Follows an explanation of the patch that introduced unlocked_ioctl and compat_ioctl. [131047840070] |The removal of the ioctl field happened a lot later. [131047840080] |Explanation: When ioctl was executed, it took the Big Kernel Lock (BKL), so nothing else could execute at the same time. [131047840090] |This is very bad on a multiprocessor machine, so there was a big effort to get rid of the BKL. [131047840100] |First, unlocked_ioctl was introduced. [131047840110] |It lets each driver writer choose what lock to use instead. [131047840120] |This can be difficult, so there was a period of transition during which old drivers still worked (using ioctl) but new drivers could use the improved interface (unlocked_ioctl). [131047840130] |Eventually all drivers were converted and ioctl could be removed. [131047840140] |compat_ioctl is actually unrelated, even though it was added at the same time. [131047840150] |Its purpose is to allow 32-bit userland programs to make ioctl calls on a 64-bit kernel. 
[131047840160] |The meaning of the last argument to ioctl depends on the driver, so there is no way to do a driver-independent conversion. [131047850010] |There are cases where replacing the struct file_operations method ioctl() (in include/linux/fs.h) with compat_ioctl() in kernel 2.6.36 does not work (e.g. for some device drivers), and unlocked_ioctl() must be used. [131047860010] |Rsync two file types in one command? [131047860020] |How can I write these as one line, without repeating the same path? [131047870010] |I'd write it like this: [131047880010] |(Note that the final / in /folder/remote/, and the placement of --exclude='*' after the include rules, are important.) [131047880020] |In shells that support brace expansion (e.g. bash, ksh, zsh): [131047880030] |Add --include='*/' --prune-empty-dirs if you want to copy files in subdirectories as well. [131047890010] |KDE [131047890020] |For users on Linux and Unix, KDE offers a full suite of user workspace applications which allow interaction with these operating systems in a modern, graphical user interface. [131047890030] |This includes Plasma Desktop, KDE's innovative desktop interface. [131047890040] |Other workspace applications are included to aid with system configuration, running programs, or interacting with hardware devices. [131047890050] |While the fully integrated KDE Workspaces are only available on Linux and Unix, some of these features are available on other platforms. [131047890060] |In addition to the workspace, KDE produces a number of key applications such as the Konqueror web browser, Dolphin file manager and Kontact, the comprehensive personal information management suite. [131047890070] |However, our list of applications includes many others, including those for education, multimedia, office productivity, networking, games and much more. [131047890080] |Most applications are available on all platforms supported by the KDE Development Platform.
[131047890090] |KDE also brings to the forefront many innovations for application developers. [131047890100] |An entire infrastructure has been designed and implemented to help programmers create robust and comprehensive applications in the most efficient manner, eliminating the complexity and tediousness of creating highly functional applications. [131047890110] |It is our hope and continued ambition that the KDE team will bring open, reliable, stable and monopoly-free computing to the everyday user. [131047900010] |http://kde.org [131047910010] |Simple Templating for Config Files [131047910020] |I need to manage a growing set of similar-but-different ASCII files. (It so happens that they're Apache VirtualHosts, but that's not particularly relevant.) For each production website, the developers might want as many as four variant configs for a variety of reasons. [131047910030] |Would you please recommend your favorite templating system? [131047910040] |I know of a few; M4 and Template::Toolkit come to mind. [131047910050] |My most important feature isn't power, it's intuitive operation and elegant simplicity. [131047920010] |I'd go for a custom solution written in Perl using e.g. Template::Toolkit. [131047920020] |That way you can keep your code elegant and simple. [131047930010] |libcanberra is failing to build [131047930020] |I got version 0.26 from the libcanberra site, and running make gives: [131047930030] |Output of ./configure: [131047930040] |UPDATE: I'm no longer experiencing this problem and have no idea what fixed it. [131047940010] |You're probably missing the GTK-Doc tools to generate documentation. [131047940020] |One way to find out these dependencies is by looking at what distributions do to build the package.
[131047940030] |For example, on Debian, in debian/control, the dependencies (except Debian-specific stuff) are [131047940040] |m4, libltdl-dev | libltdl7-dev (>= 2.2.6), libasound2-dev, libvorbis-dev, libgtk2.0-dev (>= 2.20), tdb-dev (> 1.1), gtk-doc-tools, libpulse-dev (>= 0.9.11), libgstreamer0.10-dev (>= 0.10.15) [131047950010] |Query DHCP server leases from Perl script [131047950020] |I have a Windows 2003 server and need to poll the DHCP lease information from it with a Perl script that is running on an Ubuntu server. [131047950030] |Then I need to analyze & store the information in a MySQL database. [131047950040] |Is there a way to query the leases from a Perl script? [131047950050] |I can figure out how to process the info after I get it. [131047950060] |Thanks. [131047960010] |From Ubuntu 10.10, how do you connect to a Windows 7 share without a password setup? [131047960020] |I have: [131047960030] |
  • Ubuntu 10.10
[131047960040] |
  • Windows 7 in VirtualBox (let's call this vbox)
[131047960050] |
  • Then another Windows 7 machine (let's call this remote) on the network
[131047960060] |When I'm on vbox and browse (\\ip.of.remote) I'm able to see the shared drives. [131047960070] |No password was set up, and none was asked for. [131047960080] |When on Ubuntu, and I go to smb://ip.of.remote, it asks for a username/password. [131047960090] |How do I fix this? [131047960100] |Thanks! [131047960110] |UPDATE [131047960120] |
  • Jan 30 2011: I'm able to connect from Xubuntu to Windows (via Gigolo). [131047960130] |It allows me to press connect even if there's no username. [131047960140] |With Ubuntu though, if I remove the username, the connect button is grayed out. [131047960150] |Maybe then it's just an interface problem?
[131047970010] |You can try providing guest as the username and no password. [131047970020] |It seems to me that sometimes Ubuntu forgets to try with the guest credentials. [131047980010] |[LFS] What is the most compatible tiny X server? [131047980020] |I've been building LFS/BLFS for about a month now, with multiple failures and near successes, and I've just been informed that there exist Xorg-like window systems that are incredibly tiny, as Xorg's LFS build is over 200MB of just source packages. [131047980030] |I Googled around the web, but the Wikipedia article on TinyX pointed me to a nonexistent home page for a nice Xorg clone. [131047980040] |I'm looking to build a DSL-like distro (truthfully, it's a faster clone of ChromeOS), and I've got everything except an X server ready. [131047980050] |What I was looking for was the following: [131047980060] |
  • Something that's reasonably small, as I was hoping to get my distro down to 50MB when it is compressed.
[131047980070] |
  • Something that is fairly compatible with the normal X server (I don't know what I'm talking about, but I was hoping for something that works with any X application).
[131047980080] |
  • Something that will work fully (no hiccups!) with OpenBox or FluxBox (preferably OpenBox, as I've almost made my theme for it).
[131047980090] |
  • Something that works with Plymouth, as an epic boot screen makes a bad operating system look good in the eyes of simple users.
[131047980100] |Also, as a side question, how do I package my final build? [131047980110] |I've built a small rendering system which I wish to distribute, but I can't figure out how to make an ISO out of it, like Ubuntu or DSL. [131047980120] |I know this is a lot, but I can't really find a better place to ask this sort of question. [131047980130] |Thank you very much! [131047990010] |XFree86 (http://www.xfree86.org/) includes "tiny" X servers in its build. [131047990020] |I believe they are video-card-specific, in that there's an MGA server, an ATI server, etc., etc. [131047990030] |No loadable modules. [131047990040] |I built XFree86 from source a couple of years ago (under Slackware 3.2!) but I don't think I tried the "tiny" servers to see if they worked. [131047990050] |The rest of the compile worked fine. [131047990060] |I tried XFree86 under a more modern (2.6.35, I think) Linux kernel and distro this summer, and it would not compile without significant source mods, some of which were not at all clear to me. [131047990070] |So, I can't say if XFree86 would meet your needs or not. [131048000010] |How to get output from a remote shell [131048000020] |I have an application running on a server, started from the command line. [131048000030] |From time to time, I need to connect to the server via SSH and get the output messages written to stdout by the application. [131048000040] |Is there a way to read/sniff/catch the messages on that terminal? [131048000050] |The server runs Fedora 12. [131048000060] |To clarify a bit more. [131048000070] |I have admin access to the server, but I cannot stop the running application because it is in a critical environment. [131048000080] |It is the end user who starts the application from a terminal. [131048000090] |Via ssh I have to read messages in the terminal whenever the user sees strange behavior. [131048000100] |It is a graphical application.
[131048000110] |I may build a script to run the application with screen; this means I have to change the "launcher", but some servers are not always accessible from outside the LAN for security reasons. [131048000120] |Updating is complicated. [131048000130] |My question is, if there is an application running, is there any way to catch its output in the terminal without stopping it and rerunning it under screen or whatever? [131048010010] |The obvious solution would be to redirect the application's output to a file, and look at that file: [131048010020] |If the application must have its output in a terminal, run it in screen. [131048010030] |On the server: screen -S somename -Rrd application; press Ctrl+A D to “detach” from the screen session, leaving it running in the background. [131048010040] |From the client: ssh server screen -S somename -Rrd to reconnect to the screen session. [131048010050] |If you want messages to be recorded automatically, the best way is to use the standard log facility. [131048010060] |You can arrange for log entries to be sent to other machines, either crudely with most basic syslogs, or with better filtering and dispatching options with rsyslog. [131048020010] |I think that in this case, rather than redirecting output to a file, it is better to redirect it to a named pipe (fifo), because there is no need to store all the data on disk. [131048020020] |If the program produces a lot of output, we could run out of disk space. [131048020030] |Instead of a conventional, unnamed, shell pipeline, a named pipe makes use of the filesystem. [131048020040] |It is explicitly created using mkfifo() or mknod(), and two separate processes can access the pipe by name: one process can open it as a reader, and the other as a writer.
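A minimal sketch of the named-pipe idea described above (the path and the application name myapp are illustrative, not from the thread):

```shell
# Create a FIFO and send the application's output into it; the data is
# never stored on disk, it only flows through the pipe.
mkfifo /tmp/app.fifo
myapp > /tmp/app.fifo 2>&1 &   # 'myapp' stands in for the real program

# Later, e.g. from an ssh session, read the messages as they arrive:
cat /tmp/app.fifo
```

One caveat: writes to a FIFO block until a reader opens it, so the application may stall until someone attaches with cat (or tee).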
[131048020050] |If you want to send it to stdout as well, you could use tee: [131048030010] |I am looking for a solution which does not need the server to be started in a special way, because I am not allowed to start the program and cannot even stop and restart it in this special environment. [131048030020] |Someone starts the program on the server, and I have to check the message output in the terminal when there is trouble. [131048030030] |Any idea how to achieve this? [131048040010] |"database disk image is malformed" [131048040020] |Running yum search something I get: [131048040030] |How do I fix it? [131048050010] |Just try: [131048050020] |and enter your root password. [131048060010] |When auto-completing in tcsh, can I reference a previous argument? [131048060020] |Hi all, [131048060030] |I'm trying to get some efficient auto-completing going here, and have hit upon a bit of a snag. [131048060040] |I've got a command for setting two things at once. [131048060050] |The first is a relatively small list, but the second, if not filtered by the first, is unmanageably huge. [131048060060] |What I want to be able to do is pass what's already been typed or auto-completed for the first argument to the second autocomplete command... [131048060070] |What I want to be able to do is pass the job that has already been entered for the first argument to the 'listTasks' command. [131048060080] |Any idea how I can do this? [131048060090] |Cheers [131048060100] |(this is a repost from an old SuperUser.com question of mine that nobody ever answered... [131048060110] |The SuperUser question can be found here) [131048070010] |Here's the best option I've been able to find: [131048070020] |It relies on a variable called $COMMAND_LINE, which is available on my Ubuntu system, but I'm not sure if it's standard. [131048070030] |command invoked from ... version has additional environment variable set, the variable name is COMMAND_LINE and contains (as its name indicates) contents of the current (already typed in) command line.
[131048070040] |One can examine and use contents of the COMMAND_LINE variable in her custom script to build more sophisticated completions (see completion for svn(1) included in this package). [131048070050] |Failing that, you could experiment with history expansions such as !! or !#$, but I'm not sure if that will work. [131048080010] |An easy bash completion tutorial? [131048080020] |I want to learn how to write bash completion scripts. [131048080030] |Which tutorial would you recommend for a newbie? [131048090010] |There aren't that many bash completion tutorials around, but this one is pretty good: [131048090020] |Introduction to Bash Completion [131048090030] |
  • Part 1 is for general knowledge
[131048090040] |
  • Part 2 covers creating scripts in /etc/bash_completion.d/
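To give a flavor of what goes in such a script, here is a minimal sketch; the command greet and its word list are made up for illustration:

```shell
# Complete the first argument of a hypothetical `greet` command
# from a fixed word list, using bash's programmable completion.
_greet() {
  local cur=${COMP_WORDS[COMP_CWORD]}                    # word under the cursor
  COMPREPLY=( $(compgen -W "hello goodbye" -- "$cur") )  # matching candidates
}
complete -F _greet greet
```

Sourcing this (or dropping it into /etc/bash_completion.d/) makes greet h followed by Tab complete to hello.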
[131048100010] |I would start by looking at the library of bash completions already put together by the folks here: [131048100020] |http://bash-completion.alioth.debian.org/ [131048100030] |They also have a mailing list: [131048100040] |http://lists.alioth.debian.org/mailman/listinfo/bash-completion-devel [131048110010] |Screen, remote login failure, and disappearing text [131048110020] |When in a screen session via ssh, if I attempt to connect to another host via scp or ssh and the auth fails, any subsequent text I type in the terminal will not be displayed; however, it is being entered and can be executed. [131048110030] |[user@host Oracle]$ scp user2@host2:/path/to/files . user2@host2's password: Permission denied, please try again. user2@host2's password: [user@host Oracle]$ [user@host Oracle]$ [user@host Oracle]$ [user@host Oracle]$ [user@host Oracle]$ [user@host ~]$ [user@host ~]$ [131048110040] |What you can't see above is that I did 'cd' on the last line. [131048110050] |It executed, but the output stays on the same line. ^C will give me a new line. [131048110060] |Is there a way to recover without losing my screen session? [131048120010] |stty sane, or more specifically stty echo, should turn echo back on. (stty sane will fix other terminal input or output oddities such as newlines not going back to the left margin.) [131048120020] |Ssh (and most other programs) turns echo off for the password prompt, i.e., the characters you type are not displayed (echoed) to the screen. stty -echo is a shell command with the same effect. [131048120030] |Normally echo should be turned back on (as with stty echo) after the password prompt; this is a bug in either ssh or some other software at play here, such as your system libraries or terminal emulator. [131048130010] |Batch importing .sql files [131048130020] |I have a bunch of .sql files in a directory that I need to import.
[131048130030] |Although I can do it manually, for scripting purposes I need to be able to apply them in bulk. [131048130040] |How can I do that, though? [131048130050] |What combination of options and commands do I need? [131048130060] |The mysqlimport command goes like this: [131048130070] |I need to be able to add all the text files to the end. [131048130080] |I would prefer a command to a convoluted for loop if possible. [131048130090] |Any suggestions? [131048140010] |Use wildcards: [131048140020] |If there are non-SQL files in that directory, the subset of files you do want to import may have some part of their file name in common. [131048140030] |For instance, if they all end in .sql, the command becomes: [131048140040] |If you come from DOS/Windows, it may not be clear to you why this works. [131048140050] |On Unixy systems, the shell expands wildcards, so the program (mysqlimport in this case) doesn't have to do its own processing. [131048140060] |That's why the usage message you quote says it expects the files to be given individually: that's how it will see the files if you use commands like the above. [131048140070] |The program only sees the wildcard if the pattern doesn't match anything; the shell passes it on literally to the program, having no better way of handling it. [131048150010] |What is Fedora's equivalent of deborphan? [131048150020] |An installed library that has no installed packages depending on it is referred to as orphaned. [131048150030] |One of the tools used to determine which libraries are orphaned on a Debian system is named deborphan. [131048150040] |What is the Fedora equivalent? [131048160010] |It appears that package-cleanup can perform a similar task with the following command: [131048160020] |or, to include more than just libraries: [131048160030] |package-cleanup is in the yum-utils package, which is available for installation via the Fedora repositories.
[131048170020] |I'm aware that shared objects under Linux use "so numbers", namely that different versions of a shared object are given different extensions, for example: [131048170030] |
  • example.so.1
[131048170040] |
  • example.so.2
[131048170050] |I understand the idea is to have two distinct files such that two versions of a library can exist on a system (as opposed to "DLL Hell" on Windows). [131048170060] |I'd like to know how this works in practice. [131048170070] |Often, I see that example.so is in fact a symbolic link to example.so.2, where .2 is the latest version. [131048170080] |How then does an application depending on an older version of example.so identify it correctly? [131048170090] |Are there any rules as to what numbers one must use? [131048170100] |Or is this simply convention? [131048170110] |Is it the case that, unlike in Windows where software binaries are transferred between systems, if a system has a newer version of a shared object it is linked to the older version automatically when compiling from source? [131048170120] |I suspect this is related to ldconfig, but I'm not sure how. [131048180010] |Binaries themselves know which version of a shared library they depend on, and request it specifically. [131048180020] |You can use ldd to show the dependencies; mine for ls are: [131048180030] |As you can see, it points to e.g. libpthread.so.0, not just libpthread.so. [131048180040] |The reason for the symbolic link is the linker. [131048180050] |When you want to link against libpthread.so directly, you give gcc the flag -lpthread, and it adds on the lib prefix and .so suffix automatically. [131048180060] |You can't tell it to add on the .so.0 suffix, so the symbolic link points to the newest version of the lib to facilitate that. [131048190010] |The numbers in the shared libraries are a convention used in Linux to identify the API of a library. [131048190020] |Typically the format is: [131048190030] |And as you noticed, usually there is a symbolic link from libFOO.so to libFOO.MAJOR.MINOR.so. [131048190040] |The MAJOR is typically incremented when the API changes (new entry points are removed or the parameters or types changed).
[131048190050] |The MINOR is typically incremented for bug fix releases or when new APIs are introduced without breaking existing APIs. [131048190060] |The ldconfig command is responsible for creating the libFOO.so link to the latest version of libFOO.MAJOR.MINOR.so. [131048190070] |A more extensive discussion can be found here: [131048190080] |http://www.ibm.com/developerworks/web/library/l-shlibs.html [131048200010] |libNAME.so is the filename used by the compiler/linker when first looking for a library specified by -lNAME. [131048200020] |Inside a shared library file is a field called the SONAME. [131048200030] |This field is set when the library itself is first linked into a shared object (so) by the build process. [131048200040] |This SONAME is what the linker stores in an executable when that shared object is linked with it. [131048200050] |Normally the SONAME is in the form of libNAME.so.MAJOR and is changed anytime the library becomes incompatible with existing executables linked to it; both major versions of the library can be kept installed as needed (though only one will be pointed to for development as libNAME.so). Also, to support easily upgrading between minor versions of a library, libNAME.so.MAJOR is normally a link to a file like libNAME.so.MAJOR.MINOR. [131048200060] |A new minor version can be installed and, once complete, the link to the old minor version is bumped to point to the new minor version, immediately upgrading all new executions to use the upgraded library. [131048200070] |Also, see my answer to Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work?? [131048210010] |What killed hotwire-shell? [131048210020] |It was supposed to be a more appealing shell than the traditional shell, with some hype a few years back, and then... it died? [131048210030] |What happened? [131048220010] |Drag and drop between windows in tiling i.e. minimalist WMs?
[131048220020] |Do any minimal WMs, like scrotwm or xmonad (OR any others), support drag and drop between windows out of the box? [131048220030] |If so, which? [131048220040] |If not, is there a way to enable such functionality? [131048220050] |A classic example would be to have a file manager in one window, from which you drag a file into an open application in another window to open it, etc. etc. [131048220060] |Thanks. [131048230010] |Awesome supports drag and drop between windows. [131048230020] |There is a catch: you can't change tags¹ while dragging, but you can show two tags at the same time (with all their windows) and then drag and drop. [131048230030] |¹something similar to a workspace, but more flexible [131048240010] |I'm using dwm (5.8.2 atm) and when I try to drag and drop anything from program A to program B, it works. [131048240020] |You can't change workspace while dragging files, so you need to use the same workspace. [131048240030] |I just tried, so it actually works ;) [131048250010] |In X11, drag and drop is something that the application must support; it has nothing to do with the window manager. [131048250020] |For example: you cannot drag'n'drop anything in an xcalc window, even with the Compiz window manager. [131048250030] |The X11 drag and drop protocol is called XDND: see http://www.newplanetsoftware.com/xdnd/ for more information. [131048260010] |Converting syslog-ng 3.0? format to 3.2 format [131048260020] |I just rebooted my system to this warning: [131048260030] |Anyone know of any good resources on converting formats? My syslog-ng.conf is primarily from the Gentoo Security Handbook, and thus simply using the .pacnew file won't work. [131048260040] |Here's my current conf file: [131048270010] |Hi, it's probably related to this change in 3.2:
  • syslog-ng traditionally expected an optional hostname field even when a syslog message is received on a local transport (e.g. /dev/log). [131048270030] |However no UNIX version is known to include this field. [131048270040] |This caused problems when the application creating the log message has a space in its program name field. [131048270050] |This behaviour has been changed for the unix-stream/unix-dgram/pipe drivers if the config version is 3.2 and can be restored by using an explicit 'expect-hostname' flag for the specific source.
[131047270060] |You receive the warning because you use the unix-stream("/dev/log"); in your source. [131047270070] |If you don't experience any problems with your local logs, there is nothing else to do except changing the first line to @version: 3.2. [131047270080] |If your distro adds the hostname to log messages coming from /dev/log (which they rarely do), then include flags(expect-hostname) in the source. [131047270090] |Regards, [131047270100] |Robert Fekete, syslog-ng documentation maintainer [131047280010] |Is there some sort of PDF-to-text converter? [131047280020] |I need PDF files converted to text so I can search over them in bulk from the command line. [131047280030] |Is there some converter for Ubuntu, OBSD or a similar distro? [131047280040] |Perhaps a related post: OCR with Ubuntu here. [131047290010] |pdftotext is likely what you are looking for: http://en.wikipedia.org/wiki/Pdftotext unless the text you want to extract is really in graphical form, which is not that common with PDF documents. [131047300010] |You have a lot of options! [131047300020] |pdftotext from poppler has already been mentioned. [131047300030] |There's a Haskell program called pdf2line which works well. [131047300040] |calibre's ebook-convert command-line program (or calibre itself) is another option; it can convert PDF to plain text, or other ebook formats (RTF, ePub); in my opinion it generates better results than pdftotext, although it is considerably slower. [131047300050] |ebook-convert file.pdf file.txt [131047300060] |AbiWord can convert between any formats it knows from the command line, and at least optionally has a PDF import plugin: [131047300070] |abiword --to=txt file.pdf [131047300080] |Yet another option is podofotextextract from the podofo PDF tools library. [131047300090] |I haven't really tried that. [131047300100] |If you combine the two Ghostscript tools, pdf2ps and ps2ascii, you have yet another option.
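Since the goal here is searching in bulk from the command line, any of these converters can be wrapped in a small loop; a sketch using pdftotext (assuming poppler-utils is installed; filenames are illustrative):

```shell
# Convert every PDF in the current directory to a .txt next to it,
# then the results can be searched with grep.
for f in ./*.pdf; do
  [ -e "$f" ] || continue           # the glob matched nothing: skip
  pdftotext "$f" "${f%.pdf}.txt"    # strip .pdf, append .txt
done
grep -l "search term" ./*.txt       # list the files containing the term
```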
[131048300110] |I can actually think of a few more methods, but I'll leave it at that for now. ;) [131048310010] |You can convert PDFs to text on the command line with pdftotext (Ubuntu: poppler-utils ; OpenBSD: xpdf-utils package). [131048310020] |You can use Recoll (Ubuntu: recoll ; OpenBSD: no port, but there's one for FreeBSD.) to search inside various formatted text document types, including PDF. [131048310030] |There's a GUI, and it builds an index automatically under the hood. [131048310040] |It uses pdftotext to convert PDF to text. [131048310050] |Acrobat Reader (at least version 9 under Linux) has a limited multiple-file search capability (you can search in all the files in a directory). [131048320010] |How to ensure bluetooth is switched off after boot-up? [131048320020] |Please show me which magic button to press for the case where I'm tired of manually switching off the distracting bluetooth light on my laptop after every boot-up. [131048330010] |Note: I am unable to test this answer. [131048330020] |Assuming that you want to shut off bluetooth and not just the indicator light, the rfkill utility does what you want. [131048330030] |The following command should disable bluetooth: [131048330040] |In order to do this on every boot, this line can be placed in /etc/rc.local, another custom init script, or (if available) an upstart script. [131048330050] |I recommend using the full path of the executable inside /etc/rc.local or in a custom init script. [131048330060] |On my system this is /sbin/rfkill, but it can be found using the command which rfkill. [131048330070] |Thus on my system, I would place the following command within /etc/rc.local somewhere before exit 0: [131048330080] |Depending on your Debian setup, you may not have /etc/rc.local. [131048330090] |In this case, a custom init script may be the way to go.
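For the /etc/rc.local route described above, the line would be something like the following (rfkill's block syntax is assumed here; check rfkill list on your system first):

```shell
# Fragment for /etc/rc.local, placed before the final `exit 0`:
/sbin/rfkill block bluetooth   # soft-block the bluetooth radio at boot
```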
[131048330100] |The init script could be saved at /etc/init.d/disable-bluetooth and contain something like: [131048330110] |Then ensure the command is executable (chmod 755) and add it to startup (update-rc.d disable-bluetooth defaults). [131048330120] |An example of an upstart script would be a file named /etc/init/disable-bluetooth.conf containing something like: [131048330130] |rfkill uses /dev/rfkill, which is an interface provided by the Linux kernel. [131048340010] |`p` key doesn't work in X [131048340020] |Today I had to force-shutdown my machine after it froze during resume from suspend. [131048340030] |Since the reboot, I've found that the p key doesn't work normally in X. It does work normally in the console. [131048340040] |Modified keypresses, e.g. shift-p, ctrl-p, do work normally. [131048340050] |Pressing p with xev running gives: [131048340060] |Could this problem be happening because of file corruption? [131048340070] |What file would I check for corruption? [131048340080] |I've done an fsck on the system drive (by running tune2fs -C 200 /dev/sda3 before rebooting), which seems to have come up clean. [131048340090] |I.E. [131048340100] |I'm running an updated (last dist-upgrade done yesterday) Ubuntu 10.10. [131048350010] |I've realized that this was happening because of a typo I made when manually editing my xfce keyboard shortcuts file. [131048350020] |Specifically, the file ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml used the modifier Meta5 (which doesn't exist) instead of Mod5 to modify the p key. [131048350030] |I did note that no errors were recorded in ~/.xsession-errors, despite the fact that xfce seems to register things there. [131048350040] |It may be useful to some people to note that one of my reasons for editing the file was in order to make the same shortcuts work with or without the Keyboard Layouts applet being loaded.
[131048350050] |Depending on whether or not that applet is loaded, the "windows" key will register as either or . [131048360010] |Prevent mplayer from changing system volume [131048360020] |When I change the volume in mplayer, it also changes for other applications. [131048360030] |How can I configure mplayer to only change its own volume? [131048360040] |Or is this a problem with the rest of the audio stack? [131048360050] |I am using ALSA with ESD. [131048370010] |mplayer takes a -softvol flag that makes it use its software volume control instead of the sound card's mixer. [131048370020] |If you want it on permanently, you can add the following to ~/.mplayer/config: [131048380010] |How can I tweak the kernel for total swap-out? [131048380020] |I would like to deploy the following swapping policy: [131048380030] |
  • By default all pages in memory should also be in swap space.
[131048380040] |
  • When a page in memory is changed (i.e. dirty), the page should be written out as soon as possible, but with lower priority than other processes.
[131048380050] |
  • If a certain configurable watermark is reached (let's say 80% of pages are dirty), the priority will be equal to that of other processes.
[131048380060] |Is this kind of swapping policy possible with the Linux kernel? [131048380070] |If so, how do I set the kernel settings to achieve this? [131048380080] |Edit: [131048380090] |Obviously the reason for this is to reduce the number of pages that need to be swapped out. [131048380100] |Only dirty pages need to be written to disk, and this happens in the background over time. [131048380110] |Therefore when page misses occur (i.e. the page is not in memory), there is no need to write any pages from memory to disk, but only from disk to memory. [131048380120] |Therefore it reduces the probability of I/O bottlenecks, because otherwise both swapping in and swapping out try to access the disk simultaneously. [131048390010] |Increase your RAM. [131048390020] |RAM is inexpensive. [131048390030] |A rule of thumb is, if you're swapping at all, you have a problem. [131048390040] |If your application needs a lot of RAM, [131048390050] |consider re-writing the application and spreading it across cores/systems instead. [131048400010] |Just because your system is swapping does not mean you have a problem. [131048400020] |There are applications that are finely tuned to take great advantage of swap without hindering the performance of the system. [131048400030] |Most relational database systems are tuned this way, e.g. Oracle and Cache, probably being the biggest two. [131048400040] |If you use hibernation, it uses swap space to store the contents of RAM. [131048400050] |When booting the system back up, everything in swap is added back to RAM. [131048400060] |This way, you can power down your system without chewing through the battery like standby, and still get back to where you left off before power down. [131048400070] |As a result, your battery will last much longer. [131048400080] |Swapping can be a great thing, because it frees up more of your active RAM, keeping the performance of your system high.
[131048400090] |When your active RAM is filled AND your swap is filled, and you still need more room, then and only then, do you have a problem. [131048400100] |Until that point, swap is here to help you, not hurt you. [131048410010] |You can set the value of /proc/sys/vm/swappiness to control the ratio of swapping vs keeping things in memory. [131048410020] |A value of 0 completely avoids swapping at all costs. [131048410030] |This can be done using either: [131048410040] |
  • echo 0 >/proc/sys/vm/swappiness
  • [131048410050] |
  • sysctl -w vm.swappiness=0
  • [131048410060] |
  • Storing that setting in /etc/sysctl.conf
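For concreteness, the three approaches can be sketched together (reading the current value is safe; the write commands are shown commented out because they need root):

```shell
# Show the current swappiness value (0-100 on older kernels, 0-200 on newer):
cat /proc/sys/vm/swappiness

# Lower it for the running system (either form, as root):
#   echo 0 > /proc/sys/vm/swappiness
#   sysctl -w vm.swappiness=0

# Make the change survive reboots by adding this line to /etc/sysctl.conf:
#   vm.swappiness = 0
```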
  • [131048410070] |Generally, using just a little swap is not a bad thing. [131048410080] |Free memory can be used for caching data read from disk, and the system can plan ahead for a sudden need of lots of memory by an application. [131048410090] |When too many programs are swapped out, however, there is a lot of disk-related activity during every program switch, which really makes everything slow down. [131048410100] |Before something can be used, it needs to be loaded back into memory. [131048410110] |Disk reads are horribly slow compared to memory access. [131048410120] |It is like waiting for an hour before the data arrives. [131048410130] |The system has to schedule the read between the other read/write requests, the drive starts seeking to the right cylinder, and finally starts delivering data... slowly. [131048410140] |Hence, I think your logic is flawed. [131048410150] |Generally, you want to keep programs running in memory, and keep enough room for sudden growth. [131048410160] |Swap is not something to use liberally just to "write things to disk". [131048410170] |It's neither a backup nor a performance improvement. [131048410180] |Old computers with less memory suffered from swapping problems. [131048410190] |With a lot of programs open everything was slow, and you could hear the disk reading and writing all the time. [131048410200] |To the swap file. [131048420010] |Quoting in ssh $host $FOO and ssh $host "sudo su user -c $FOO" type constructs [131048420020] |I often end up issuing complex commands over ssh; these commands involve piping to awk or perl one-liners, and as a result contain single quotes and $'s. [131048420030] |I have neither been able to figure out a hard and fast rule to do the quoting properly, nor found a good reference for it. [131048420040] |For instance, consider the following: [131048420050] |(Note the extra quotes in the awk statement.) [131048420060] |But how do I get this to work with, e.g. ssh $host "sudo su user -c '$CMD'"?
[131048420070] |Is there a general recipe for managing quotes in such scenarios?
  • Each “level of quoting” can potentially involve a different language.
  • [131048440040] |
  • Quoting rules vary by language.
  • [131048440050] |
  • When dealing with more than one or two nested levels, it is usually easiest to work “from the bottom, up” (i.e. innermost to outermost).
  • [131048440060] |

    Levels of Quoting

    [131048440070] |Let us look at your example commands. [131048440080] |Your first example command (above) uses four languages: your shell, the regex in pgrep, the regex in grep (which might be different from the regex language in pgrep), and awk. [131048440090] |There are two levels of interpretation involved: the shell and one level after the shell for each of the involved commands. [131048440100] |There is only one explicit level of quoting (shell quoting into awk). [131048440110] |Next you added a level of ssh on top. [131048440120] |This is effectively another shell level: ssh does not interpret the command itself, it hands it to a shell on the remote end (via (e.g.) sh -c …) and that shell interprets the string. [131048440130] |Then you asked about adding another shell level in the middle by using su (via sudo, which does not interpret its command arguments, so we can ignore it). [131048440140] |At this point, you have three levels of nesting going on (awk → shell, shell → shell (ssh), shell → shell (su user -c)), so I advise using the “bottom, up” approach. [131048440150] |I will assume that your shells are Bourne compatible (e.g. sh, ash, dash, ksh, bash, zsh, etc.). [131048440160] |Some other kind of shell (fish, rc, etc.) might require different syntax, but the method still applies. [131048440170] |

    Bottom, Up

    [131048440180] |
  • Formulate the string you want to represent at the innermost level.
  • [131048440190] |
  • Select a quoting mechanism from the quoting repertoire of the next-highest language.
  • [131048440200] |
  • Quote the desired string according to your selected quoting mechanism. [131048440210] |
  • There are often many variations on how to apply which quoting mechanism. [131048440220] |Doing it by hand is usually a matter of practice and experience. [131048440230] |When doing it programmatically, it is usually best to pick the easiest to get right (usually the “most literal” (fewest escapes)).
  • [131048440240] |
  • Optionally, use the resulting quoted string with additional code.
  • [131048440250] |
  • If you have not yet reached your desired level of quoting/interpretation, take the resulting quoted string (plus any added code) and use it as the starting string in step 2.
  • [131048440260] |

    Quoting Semantics Vary

    [131048440270] |The thing to keep in mind here is that each language (quoting level) may give slightly different semantics (or even drastically different semantics) to the same quoting character. [131048440280] |Most languages have a “literal” quoting mechanism, but they vary in exactly how literal they are. [131048440290] |The single quote of Bourne-like shells is actually literal (which means you can not use it to quote a single quote character itself). [131048440300] |Other languages (Perl, Ruby) are less literal in that they interpret some backslash sequences inside single quoted regions non-literally (specifically, \\ and \' result in \ and ', but other backslash sequences are actually literal). [131048440310] |You will have to read the documentation for each of your languages to understand its quoting rules and the overall syntax. [131048440320] |

    Your Example

    [131048440330] |The innermost level of your example is an awk program. [131048440340] |You are going to embed this in a shell command line: [131048440350] |We need to protect (at a minimum) the space and the $ in the awk program. [131048440360] |The obvious choice is to use single quote in the shell around the whole program. [131048440370] |
  • '{print $1}'
  • [131048440380] |There are other choices though: [131048440390] |
  • {print\ \$1} directly escape the space and $
  • [131048440400] |
  • {print' $'1} single quote only the space and $
  • [131048440410] |
  • "{print \$1}" double quote the whole and escape the $
  • [131048440420] |
  • {print" $"1} double quote only the space and $. This may be bending the rules a bit (an unescaped $ at the end of a double quoted string is literal), but it seems to work in most shells.
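That these variations all hand awk the identical program text is easy to check with a throwaway input line:

```shell
# Every one of these passes the single word {print $1} to awk:
printf 'first second\n' | awk '{print $1}'     # whole program single-quoted
printf 'first second\n' | awk "{print \$1}"    # double-quoted, $ escaped
printf 'first second\n' | awk {print\ \$1}     # space and $ escaped directly
printf 'first second\n' | awk {print" $"1}     # space and $ double-quoted
```

Each command prints first.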
  • [131048440430] |If the program used a comma between the open and close curly braces we would also need to quote or escape either the comma or the curly braces to avoid “brace expansion” in some shells. [131048440440] |We pick '{print $1}' and embed it in the rest of the shell “code”: [131048440450] |Next, you wanted to run this via su and sudo. [131048440460] |su user -c … is just like some-shell -c … (except running under some other UID), so su just adds another shell level. sudo does not interpret its arguments, so it does not add any quoting levels. [131048440470] |We need another shell level for our command string. [131048440480] |We can pick single quoting again, but we have to give special handling to the existing single quotes. [131048440490] |The usual way looks like this: [131048440500] |There are four strings here that the shell will interpret and concatenate: the first single quoted string (pgrep … awk), an escaped single quote, the single-quoted awk program, another escaped single quote. [131048440510] |There are, of course, many alternatives:
  • pgrep\ -fl\ java\ \|\ grep\ -i\ datanode\ \|\ awk\ \'{print\ \$1}\' escape everything important
  • [131048440530] |
  • pgrep\ -fl\ java\|grep\ -i\ datanode\|awk\ \'{print\$1}\' the same, but without superfluous whitespace (even in the awk program!)
  • [131048440540] |
  • "pgrep -fl java | grep -i datanode | awk '{print \$1}'" double quote the whole thing, escape the $
  • [131048440550] |
  • 'pgrep -fl java | grep -i datanode | awk '"'"'{print $1}'"'" your variation; a bit longer than the usual way due to using double quotes (two characters) instead of escapes (one character)
  • [131048440560] |Using different quoting in the first level allows for other variations at this level: [131048440570] |
  • 'pgrep -fl java | grep -i datanode | awk "{print \$1}"'
  • [131048440580] |
  • 'pgrep -fl java | grep -i datanode | awk {print\ \$1}'
  • [131048440590] |Embedding the first variation in the sudo/*su* command line gives this: [131048440600] |You could use the same string in any other single-shell-level context (e.g. ssh host …). [131048440610] |Next, you added a level of ssh on top. [131048440620] |This is effectively another shell level: ssh does not interpret the command itself, but it hands it to a shell on the remote end (via (e.g.) sh -c …) and that shell interprets the string. [131048440630] |The process is the same: take the string, pick a quoting method, use it, embed it. [131048440640] |Using single quotes again: [131048440650] |Now there are eleven strings that are interpreted and concatenated: 'sudo su user -c ', escaped single quote, 'pgrep … awk ', escaped single quote, escaped backslash, two escaped single quotes, the single quoted awk program, an escaped single quote, an escaped backslash, and a final escaped single quote. [131048440660] |The final form looks like this: [131048440670] |This is a bit unwieldy to type by hand, but the literal nature of the shell’s single quoting makes it easy to automate a slight variation:
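A minimal sketch of that automation (sq is a made-up helper name here): wrap a string in single quotes after replacing every embedded single quote with the '\'' idiom, and apply it once per added shell level.

```shell
# sq: print its argument wrapped in single quotes, with each embedded
# single quote rewritten as the '\'' idiom.
sq() {
  printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"
}

CMD="pgrep -fl java | grep -i datanode | awk '{print \$1}'"

# One application of sq prepares the string for one extra shell level:
INNER="sudo su user -c $(sq "$CMD")"
printf '%s\n' "$INNER"

# A second application would prepare INNER for ssh (host is a placeholder):
#   ssh host "$(sq "$INNER")"
```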
[131048450080] |Ssh and sudo tend to drop most environment variables, but they're often configured to let LC_* through, because these are normally very important for usability (they contain locale information) and are rarely considered security sensitive. [131048450090] |Here, since LC_CMD contains a shell snippet, it must be provided literally to the innermost shell. [131048450100] |Therefore the variable is expanded by the shell immediately above. [131048450110] |The innermost-but-one shell sees "$LC_CMD", and the innermost shell sees the commands. [131048450120] |A similar method is useful to pass data to a text processing utility. [131048450130] |If you use shell interpolation, the utility will treat the value of the variable as a command, e.g. sed "s/$pattern/$replacement/" won't work if the variables contain /. [131048450140] |So use awk (not sed), and either its -v option or the ENVIRON array to pass data from the shell (if you go through ENVIRON, remember to export the variables). [131048460010] |What is Wayland? [131048460020] |I was looking for a lightweight X server, but failed to find one. [131048460030] |Then I found out about Wayland. [131048460040] |It says that it aims to coexist with X, but can run standalone. [131048460050] |When I try to compile it, it needs Mesa, which needs X. [131048460060] |What exactly is Wayland? [131048470010] |http://wayland.freedesktop.org/faq.html [131048480010] |Wayland is an experimental new display server. [131048480020] |It is not an X server, and to run X applications you will need to run an X server with it (see the bottom diagram on Wayland Architecture). [131048480030] |Since there are very few Wayland applications so far, this means you really can't use it to replace X yet. [131048490010] |Searching a text file with a single line using regular expressions. [131048490020] |Hi, as far as I know, all Unix text processing utilities read one line at a time and perform one action on this line.
[131048490030] |I have a huge file with a single line of text which contains several tokens I'm concerned with. [131048490040] |You can think of the content of the file as something like this: xzxzxzzxzxAxzzBxzxCzxxzxxzxzzxzxzAzBzxxxxzzCzxzxzxzxzxxzz [131048490050] |I want to get the two strings between (A and B) and (B and C) for every occurrence of A.*B.*C. [131048490060] |In this example my desired output would be this: [131048490070] |xzz xzx [131048490080] |z zxxxxzz [131048490090] |How do I do this? [131048490100] |edit: sorry, I didn't make it clear. [131048490110] |A, B and C are long strings that can only be identified by regular expressions. [131048500010] |If the line can fit into memory, then repeated use of the split function from Perl would work. [131048500020] |Otherwise, I would read the file in blocks (with the Perl sysread function) and process each block individually as above -- allowing for strings of interest to cross block boundaries. [131048510010] |I'm sure there are many interesting answers using awk, perl, sed, and others. [131048510020] |Here is a rather simplistic option that uses tr to turn this problem back into a problem that we know how to solve -- finding a pattern within a line: [131048510030] |The tr 'C' '\n' command translates any "C" in the input into a newline character. [131048510040] |Thus, it is then necessary to just pipe it into a command that will output the text between A and B and between B and the end of the line. [131048510050] |If A, B, and C are regular expressions rather than simple characters, try: [131048510060] |This uses the same basic idea, but uses sed to create the newlines. [131048520010] |Awk generalizes the notion of lines to records, which can be terminated by any character. [131048520020] |Several implementations, such as Gawk, support an arbitrary regular expression as the record separator.
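Putting the tr recipe together on the sample string from the question (with literal A, B, and C; the sed variant generalizes this to regular expressions):

```shell
s='xzxzxzzxzxAxzzBxzxCzxxzxxzxzzxzxzAzBzxxxxzzCzxzxzxzxzxxzz'

# Split the single line at every C, then print the A..B and B..C parts:
printf '%s' "$s" | tr 'C' '\n' | sed -n 's/.*A\(.*\)B\(.*\)/\1 \2/p'
```

This prints the two pairs from the question, xzz xzx and z zxxxxzz, one pair per line.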
[131048520030] |Untested: [131048530010] |How can I customize the look of Thunderbird's mailbox list, message list, and headers? [131048530020] |Mozilla Thunderbird's message list, mailbox list, and headers use a font size that is so large, I can barely see any content in the message preview pane. [131048530030] |I'd like to reduce these to 10px, and reduce the headers to 8px or less. [131048530040] |How can I do this? [131048540010] |Thunderbird is written in XUL. [131048540020] |It's Mozilla's markup language, and it's powered by XULRunner. [131048540030] |Basically, it's GUI-oriented XML. [131048540040] |The thing that styles the whole application is actually just a simple .css file. [131048540050] |If you find it, you can then find the elements you are looking for and just tweak the CSS. [131048540060] |I'll post back the required path to the file and the rules to be tweaked. [131048550010] |Blender's answer pointed me in the right direction. [131048550020] |I didn't actually modify those files; what I did instead was create a file ~/.mozilla-thunderbird/iddbnhwr.default/chrome/userChrome.css and I put my changes in there. [131048550030] |I made mine look like this: [131048550040] |Analyzing the files from Blender's answer showed me that the following are the CSS selectors I wanted:
  • #folderTree - The list of folders on the left hand side
  • [131048550060] |
  • #threadTree - The list of messages on the top right.
  • [131048550070] |
  • #msgHeaderView - The header pane at the top of every message preview / viewer window
  • [131048550080] |There's a lot more interesting stuff in those files: [131048550090] |
  • #mailContent - Looks like the body of mail messages?
  • [131048550100] |
  • #folderUnreadCol, #folderTotalCol, #folderSizeCol, #folderNameCol - Self explanatory
  • [131048550110] |
  • treecol.flagColumnHeader - Looks like you could change the flag icon to something else... [131048550120] |Maybe an upvote icon? ;-)
  • [131048550130] |
  • treecol.junkStatusHeader - Same for junk icon. [131048550140] |Just change the list-style-image: url(...) rule.
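Putting the three selectors together, a userChrome.css along these lines would produce the sizes asked about in the question (10px for the lists, 8px for the headers); the profile directory name varies per installation:

```css
/* ~/.mozilla-thunderbird/<profile>/chrome/userChrome.css */
#folderTree, #threadTree {
  font-size: 10px !important;  /* folder list and message list */
}
#msgHeaderView {
  font-size: 8px !important;   /* header pane of the preview/viewer */
}
```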
  • [131048560010] |How to pass the output of one command as the command-line argument to another? [131048560020] |So I have a script that, when I give it two addresses, will search two HTML links: [131048560030] |I want to send this to wget and then save the output in a file called temp.html. [131048560040] |I tried this, but it doesn't work. [131048560050] |Can someone explain why and/or give me a solution please? [131048570010] |You're not actually executing your URL line: [131048580010] |wget also accepts stdin with the - switch. [131048580020] |If you want to save the output in a file, use the -O switch. [131048590010] |You can use backticks (`) to evaluate a command and substitute in the command's output, like: [131048590020] |In your case: [131048600010] |You could use "xargs". [131048600020] |A trivial example: [131048600030] |You would have to take care that xargs doesn't split its stdin into two or more invocations of the command ("cat" in the example above). [131048610010] |It seems you could use a combination of the answers here. [131048610020] |I'm guessing you want to replace space chars with their escaped ASCII values in the URL. [131048610030] |To do this, you need to replace them with "%20", not just "%". [131048610040] |Here's a solution that should give you a complete answer: [131048610050] |The backticks indicate that the enclosed command should be interpreted first, and the result sent to wget. [131048610060] |Notice I escaped the space and % chars in the sed command to prevent them from being misinterpreted. [131048610070] |The -q option for wget prevents progress output from the command being printed to the screen (handy for scripting when you don't care about the in-work status) and the -O option specifies the output file. [131048610080] |FYI, if you don't want to save the output to a file, but just view it in the terminal, use "-" instead of a filename to indicate stdout. [131048620010] |View Script Over SSH?
[131048620020] |A friend, using a remote machine, SSHed to my machine and ran the following Python script: [131048620030] |while (1): [131048620040] |....print "hello world" [131048620050] |(this script simply prints 'hello world' continuously). [131048620060] |I am now logged in to my machine. [131048620070] |How can I see the output of the script my friend was running? [131048620080] |If it helps, I can 'spot' the script my friend is using: [131048620090] |me@home:~$ ps aux | grep justprint.py [131048620100] |friend 7494 12.8 0.3 7260 3300 ? [131048620110] |Ss 17:24 0:06 python TEST_AREA/justprint.py [131048620120] |friend 7640 0.0 0.0 3320 800 pts/3 S+ 17:25 0:00 grep --color=auto just [131048620130] |What steps should I take in order to view the "hello world" messages on my screen? [131048630010] |You generally can't see the output of another person's program. [131048630020] |See over in that column where your grep command is running on tty pts/3, and your friend's is ?, which means it's detached from the terminal. [131048630030] |You could see where the output is going with ls -l /proc/7494/fd/ (where 7494 is the process ID of your friend's process) — although if you're not running as root, you probably can't even look, for security reasons. [131048630040] |(So try sudo ls -l /proc/7494/fd/.) [131048630050] |There are horrible, horrible, kludgy things you might be able to do to change where the output of the program goes. [131048630060] |But in general, you can't and shouldn't. [131048630070] |If your friend wants to share the output with you, an approach would be to redirect the output of the program to a file, and then make that file readable by you: [131048630080] |(Where in this case "readable by you" is "readable by everyone"; with a little more work you can set up a shared group so just the two of you can exchange output.) [131048630090] |(And be aware that python buffers output by default — turning that off is what the -u is for.)
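The whole arrangement can be sketched locally (a short shell loop stands in for the Python script, and /tmp/justprint.out is a made-up path):

```shell
out=/tmp/justprint.out

# What the friend would run (python -u disables output buffering):
#   python -u TEST_AREA/justprint.py > "$out" 2>&1 &
# Simulated here with a finite loop so the sketch terminates:
sh -c 'i=0; while [ "$i" -lt 3 ]; do echo "hello world"; i=$((i+1)); done' > "$out" 2>&1

# "Readable by you" done the blunt way: readable by everyone.
chmod a+r "$out"

# What you would then run to watch it (tail -f for a still-running process):
cat "$out"
```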
[131048640010] |If you have root access on the machine and your friend is willing to execute some commands, it is possible: [131048640020] |
  • [Root] screen has to be setuid root: chmod u+s /usr/bin/screen
  • [131048640030] |
  • [Friend] starts screen, he can give the session a name, makes it easier: screen -S "shared_session"
  • [131048640040] |
  • [Friend] enables multiuser mode: Ctrl-a :multiuser on
  • [131048640050] |
  • [Friend] enables you to access his screen session: Ctrl-a :acladd you
  • [131048640060] |
  • [Friend] (Optional) restricts your access to read-only: Ctrl-a :aclchg you -w "#"
  • [131048640070] |
  • [You] attach onto friends screen: screen -x friend/shared_session
  • [131048640080] |The setuid root is maybe a bit dangerous... [131048650010] |Keep SSH Sessions running after disconnection. [131048650020] |I sometimes have long-running processes that I want to kick off before going home, so I create an SSH session to the server to start the process, but then I want to close my laptop and go home and later, after dinner, I want to check on the process that I started before leaving work. [131048650030] |How can I do that with SSH? [131048650040] |My understanding is that if you break your SSH connection you will also break your login session on the server, therefore killing the long-running process... [131048660010] |You want to be using GNU Screen. [131048660020] |It is super awesome! [131048660030] |CTRL+a , d to detach from your screen session [131048660040] |When you come back to your laptop: [131048660050] |Then check out the progress of your long-running process! [131048660060] |screen is a very comprehensive tool, and can do a lot more than what I've described. [131048660070] |While in a screen session, try ctrl+a,? to learn a few common commands. [131048660080] |Probably the most common are:
  • CTRL+a , c to create a new window
  • [131048660100] |
  • CTRL+a , n to switch to the next window in your screen session
  • [131048660110] |
  • CTRL+a , p to switch to the previous window in your screen session
  • [131048660120] |
  • if you log in from a bunch of different systems, you may have accidentally left yourself attached to an active screen session on a different computer. for that reason, I always resume with screen -d -r to ensure that if another shell is attached to my screen session, it will be detached before I resume it on my current system.
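Collected into one sequence, the workflow described above looks like this (commands shown for reference, not run; the session name longjob is made up):

```
# Before going home, on the server:
screen -S longjob        # start a named screen session
./long_running_process   # kick the job off inside it
# press CTRL+a , d        # detach; the job keeps running

# After dinner, from home:
ssh user@server
screen -d -r longjob     # detach any other attachment, then resume
```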
  • [131048680010] |What you want to use is screen, or even better a user-friendly wrapper around screen called byobu. [131048680020] |Screen allows you to run multiple virtual terminal sessions in the same ssh session. [131048680030] |A tutorial and help pages are available. [131048680040] |byobu is a wrapper that allows you to easily open new screens with a simple function key instead of a key combination starting with ctrl-a. [131048680050] |It also shows a status line with all the open virtual terminals, which can be named. [131048680060] |Another nice feature is the fact that all your screens can stay up while your ssh connection is disconnected. [131048680070] |You just connect again via ssh and call byobu and everything is like before. [131048680080] |Finally, some screenshots of byobu. [131048690010] |It might be worth noting that [131048690020] |ssh -t lala screen -rxU moo will attach to the moo session on host lala [131048690030] |ssh -t lala screen -S moo will create the moo session on host lala [131048690040] |and [131048690050] |ssh -t lala screen -S moo quux will create the moo session on host lala and run the program quux, quitting the session on completion. [131048700010] |How to add a SLED 11 system to a Windows domain [131048700020] |Hello, [131048700030] |I have a small Windows domain, containing DHCP, DNS, Active Directory, CA, etc. [131048700040] |I'm trying to connect a SLED 11 machine to the domain following guides found via Google search, but everything I've found was irrelevant and not up to date. [131048700050] |Can someone please link me to an up-to-date, clear one? [131048700060] |(What I want to be able to do is to log on to the SUSE machine with a Windows Active Directory user, and to mount Windows shared folders). [131048700070] |Thank you very much!!
[131048710030] |I don't think we have any external documentation. [131048720010] |I don't have a complete howto, but first look up Samba's docs on setting up a Domain Member and joining it to the domain. [131048720020] |At some point you will run a command like "net ads join -U Administrator". You must have a smb.conf file with security = ads and workgroup = domain.local. [131048720030] |You will probably also need the MIT Kerberos packages installed at some point. [131048720040] |Once Samba on the workstation has successfully joined ADS, you can set up PAM and NSS modules to use Samba for login and user listing. [131048720050] |I think it's pam_winbind and nss_winbind you will need installed, but not 100% sure. [131048720060] |Take a look at the Samba HOW-TO about domain membership: http://samba.org/samba/docs/man/Samba-HOWTO-Collection/domain-member.html Hope this helps! [131048730010] |How to use the diff tool to take a diff between directories with libraries and source? [131048730020] |I would like to know if I can use the diff tool to take the difference between two folders which contain sources as well as libraries and executables. [131048730030] |Right now I ran [131048730040] |diff -Naur dir1/ dir2/ [131048730050] |It is comparing the executables within the directories and is dumping junk values. [131048730060] |Is there any way to exclude the libraries and executables while comparing these directories? [131048740010] |Don't use the -a switch. [131048740020] |This will only output 'Files dir1/afile and dir2/afile differ' on binary files. [131048750010] |Discovering the number of running and blocked processes in Mac OS X [131048750020] |In Linux, I could discover the number of running and blocked processes by running [131048750030] |and [131048750040] |Is there a way to do it in Mac OS X? [131048750050] |Preferably, a way which I could use in a shell script? [131048760010] |Well, the humane way is to use top.
[131048760020] |Here is some output: [131048760030] |It's not very script-friendly. [131048760040] |Here's ps aux: [131048760050] |Try playing with those. [131048760060] |I'm not sure what blocked processes are, but these commands should help. [131048760070] |Good luck! [131048770010] |Building on Blender's answer, to get the number of running processes the following can be used: [131048770020] |To get the number of processes in Uninterruptible Sleep you can use (Edit: changed 'D' to 'U', thanks Gilles!): [131048780010] |How do I type the degree symbol under X11 (using a default English keyboard layout)? [131048780020] |I'm trying to put in an email the temperature outside in degrees. [131048780030] |On my Mac, the degree symbol (°) is Option+Shift+8. [131048780040] |But I'm writing the email in Thunderbird on an Ubuntu 10.10 with the default US English keyboard layout. [131048780050] |What key combination do I use to get the degree symbol under X11? [131048780060] |EDIT: Gert successfully answered the question... but, bonus points for any easier-to-use keystroke than what's in his answer! [131048790010] |Ctrl + Shift + u (this will show an underlined u) and then the Unicode value (in this case B0), followed by Enter. [131048800010] |You can also use + + 0 [131048810010] |Set up a Compose key. [131048810020] |On Ubuntu, this is easily done in the keyboard preferences, “Layout” tab, “Options” subdialog. Caps Lock is a good choice as it's pretty much useless (all remotely serious editors have a command to make the selection uppercase for the rare times it's needed). [131048810030] |Press Compose followed by two characters (occasionally three) to enter a character you don't have on your keyboard. [131048810040] |Usually the resulting character combines the two characters you type, for example Compose ' a enters á and Compose s s enters ß. [131048810050] |The degree symbol ° is one of the less memorable combinations; it's on Compose o o.
[131048820010] |Bash script testing if a command has run correctly [131048820020] |I am working on a bash script that I would like to work for several types of VCS. [131048820030] |I am thinking of testing if a directory is a repo for a system by running a typical info command and checking the return code, success or error. [131048820040] |In pseudo code: [131048820050] |I can run a command, e.g. darcs show repo and use $? to get its return code. [131048820060] |My question is: is there a neat way to run and return the return code number in one line? For example: [131048820070] |Or do I have to define a function? [131048820080] |An added requirement is that both stderr and stdout should be printed. [131048830010] |Well, it's not very pretty, but it's one way to do it inline: [131048830020] |By definition, if tests the exit code of a command, so you don't need to do an explicit comparison, unless you want more than success or failure. [131048830030] |There's probably a more elegant way to do this. [131048840010] |if automatically checks the return code: [131048840020] |You could also run the command and use && (logical AND) or || (logical OR) afterwards to check if it succeeded or not: [131048840030] |Redirecting stdout and stderr can be done once with exec. [131048840040] |The first two exec calls save the stdout and stderr file descriptors, the third redirects both to /dev/null (or somewhere else if desired). [131048840050] |The last two exec calls restore the file descriptors again. [131048840060] |Everything in between gets redirected to nowhere. [131048840070] |Append other repo checks like Gilles suggested. [131048850010] |As others have already mentioned, if command tests whether command succeeds. [131048850020] |In fact [ … ] is an ordinary command, which can be used outside of an if or while conditional, although it's uncommon.
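These patterns can be sketched concretely; true and false stand in below for a VCS info command such as darcs show repo, which may not be installed:

```shell
# if branches on the command's exit status directly; the command's own
# stdout/stderr are left alone, so both are still printed:
is_repo() {
  if "$@"; then
    echo "looks like a repo"
  else
    echo "not a repo"
  fi
}

is_repo true     # stands in for a succeeding info command
is_repo false    # stands in for a failing one

# One-line forms: && / || on the status, or $? for the numeric code:
true && echo "succeeded"
false || echo "failed with status $?"
```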
[131048850040] |This will be correct in more edge cases. [131048850050] |Bash/ksh/zsh/dash version (untested): [131048850060] |In POSIX sh, there is no -ef (same file) construct, so a different test is needed to break out of the recursion when the root directory is reached. [131048850070] |Replace while ! [ "$d" -ef / ]; with while [ "$(cd -- "$d"; command pwd)" != / ];. [131048850080] |(Use command pwd and not pwd because some shells track symbolic links in pwd, and we don't want that here.) [131048860010] |Getting wireless card to work in Debian with wpa_supplicant [131048860020] |I'm having difficulty getting a Netgear WG311 network card to work with Debian. [131048860030] |Here is a screenshot showing ifconfig wlan0, iwconfig, network interface cards, and wpa_supplicant configs: [131048860040] |I understand that WPA doesn't work with third-party drivers and ndiswrapper; I've been told to use wpa_supplicant instead. [131048860050] |How can I get my Windows driver to work with the Netgear WG311 wireless card? [131048860060] |UPDATE [131048860070] |OK, had a look at the resource that Macieg gave me. [131048860080] |Finally got a connection, but after a restart it is gone. [131048860090] |The output of this command: wpa_supplicant -i wlan0 -D wext [131048860100] |just shows wpa_supplicant's help text. [131048860110] |UPDATE 2 [131048860120] |OK, the connection comes up after a restart, but only 5 minutes after I run this command. [131048860130] |Anyone know how to fix this? [131048870010] |wpa_supplicant does have support for ndiswrapper and should be run like this: [131048870020] |Instructions on how to install ndiswrapper drivers - however, I have never used ndiswrapper. [131048870030] |If that does not help, you have to say what the output of the following is: [131048870040] |As a side note - to use WPA you need a user-space program called a WPA supplicant. [131048870050] |The most popular one is wpa_supplicant. [131048870060] |Therefore the WPA stack on Linux consists of 2 elements: [131048870070] |
  • Driver. [131047870080] |The best are the mac80211 drivers (the new in-kernel stack); ndiswrapper is a workaround for using Windows drivers, which should work, but it may happen that it does not. [131047870090] |
  • WPA Supplicant. [131047870100] |In most cases wpa_supplicant. [131047870110] |
Usually on top of that you want NetworkManager, connman or something like that. [131047870120] |PS. [131047870130] |Preferably add text instead of screenshots, as: [131047870140] |
  • They are searchable by google/bing/$SEARCH_ENGINE [131047870150] |
  • They are easier to read [131047870160] |
  • They will not disappear for whatever reason (say - a maintenance break of a 3rd party server)
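Since the answer above only names the pieces, here is a hedged sketch of how they fit together. The SSID, passphrase, and interface name are placeholders, and the exact -D driver backend depends on your setup (wext was the usual choice for ndiswrapper at the time):

```shell
# Write a minimal WPA-PSK configuration (placeholder credentials).
cat > wpa_supplicant.conf <<'EOF'
ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="MyNetwork"
    psk="my-secret-passphrase"
    key_mgmt=WPA-PSK
}
EOF

# One would then start the supplicant (as root) with something like:
#   wpa_supplicant -D wext -i wlan0 -c wpa_supplicant.conf -B
# -D selects the driver backend, -i the interface, -c the config file,
# and -B forks into the background once the process has started.
echo "wrote wpa_supplicant.conf"
```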