[131089130010] |Modifying a squashfs [131089130020] |Obviously I don't want to actually modify a squashfs. [131089130030] |What I would like to do, though, is take an existing squashfs and a set of files, and create a new squashfs which is identical to the old one except that the files in the set either replace similar files in the squashfs or are just added if there are no similar files. [131089130040] |OK, that last part sounded weird. [131089130050] |So let me give an example: [131089130060] |there is a squashfs called mfs.squash. [131089130070] |Inside it there is a file ./a/foo. [131089130080] |I want to create a new squashfs which is identical to the old squashfs -- except that there is a new file a/b and I overwrite the ./a/foo with one of my own specification. [131089140010] |Mount the squashfs: [131089140020] |Copy the squashfs to another place: [131089140030] |Copy the new files into place: [131089140040] |Make the new squashfs: [131089150010] |What is the "Online Certificate Status Protocol"? [131089150020] |I use privoxy. [131089150030] |In the user.action file I have a redirect rule and a few websites: [131089150040] |OK, it's working great; e.g., if I visit any "*twitter.com" URL it gets redirected to HTTPS! [131089150050] |But: with wireshark I can see some OCSP (Online Certificate Status Protocol) packets. [131089150060] |Question: What are these packets? [131089150070] |Why aren't they in HTTPS? [131089150080] |Is my redirection method with privoxy secure? [131089150090] |Thank you for any tips/opinions! [131089160010] |It is a replacement for Certificate Revocation Lists. [131089160020] |Instead of having to download a (very large) list of all revoked certificates for each root certificate, a remote server can be queried for just the status of a specific certificate.
[131089170010] |OCSP is a way for programs that use X.509 certificates (such as anything using SSL, like web browsers for https: URLs) to check whether a certificate has been revoked because it was compromised. [131089170020] |You're not seeing them when using https: directly because you would need a packet tracer that can decrypt SSL and the SSL certificate used for encryption — which in most cases you won't have. [131089180010] |Does grep use a cache to speed up the searches? [131089180020] |I have noticed that subsequent runs of grep on the same query (and also a different query, but on the same file) are much faster than the first run (the effect is easily noticeable when searching through a big file). [131089180030] |This suggests that grep uses some sort of caching of the structures used for search, but I could not find a reference on the Internet. [131089180040] |What mechanism enables grep to return results faster in subsequent searches? [131089190010] |Not grep as such, but the filesystem itself often caches recently read data, causing later runs to go faster since grep is effectively searching in memory instead of on disk. [131089200010] |Linux and *NIX systems make use of various caches which sit between the filesystems (abstracted through VFS) and the user level processes. [131089200020] |So it's not grep and it's not the filesystem doing the caching - it's the operating system. [131089200030] |The cache which is responsible for your grep performance is the VFS Buffer Cache. [131089200040] |Other caches are for inodes and directories, but those wouldn't come into play here. [131089200050] |For more info see: Linux Documentation Project: 9.2 The Virtual File System http://tldp.org/LDP/tlk/fs/filesystem.html [131089210010] |Why not kill -9 a process? [131089210020] |I am always very hesitant to run kill -9. [131089210030] |But I see other admins do it almost routinely. [131089210040] |I figure there is probably a sensible middle ground.
[131089210050] |So: [131089210060] |
  • When should kill -9 be used?
  • [131089210070] |What should be tried before doing it?
  • [131089210080] |Is there any kind of debugging of a "hung" process that could cause further problems?
  • [131089210090] |When should kill -9 never be used?
  • [131089210100] |What bad things can happen if kill -9 is used?
[131089220010] |Generally, you should use kill -15 before kill -9 so as to give the program a chance to clean up after itself. [131089220020] |(Programs can't catch or ignore SIGKILL, but they can and often do catch SIGTERM.) [131089220030] |If you don't give the program a chance to finish what it's doing and clean up, it may leave corrupted files around that it won't be able to understand once restarted. [131089220040] |strace/truss, ltrace (truss -u on Solaris, where it's a bit more helpful; ltrace I find too often doesn't present arguments to library calls in a usable format), and gdb are generally good ideas for looking at why a stuck program is stuck. [131089220050] |There are also useful /proc-based tools on Solaris, some of which have been ported to Linux (pstack is often helpful). [131089230010] |Never, never do a kill -9 1. [131089230020] |Also avoid doing a kill on certain processes like mount. [131089230030] |When I have to kill a lot of processes (say, for example, when an X session gets hung and I have to kill all the processes of a certain user), I reverse the order of the processes, e.g.: [131089230040] |ps -ef | grep '<criteria>' | awk '{print $2}' | ruby -e 'STDIN.readlines.reverse.each{|pid| puts "kill -9 #{pid}"}' | bash [131089230050] |Keep in mind that kill does not stop a process and release its resources. [131089230060] |All it does is send a SIGKILL signal to the process; you could still wind up with a process that's hung. [131089240010] |Randal Schwartz used to frequently post "Useless use of (x)" on lists. [131089240020] |One such post was about kill -9. [131089240030] |It includes reasons and a recipe to follow. [131089240040] |Here is a reconstructed version (quoted below). [131089250010] |Is there a way to see details of all the threads that a process has in Linux?
[131089250020] |For Windows, I think Process Explorer shows you all the threads under a process. [131089250030] |Is there a similar command line utility for Linux that can show me details about all the threads a particular process is spawning? [131089250040] |I think I should have made myself more clear. [131089250050] |I do not want to see the process hierarchy, but a list of all the threads spawned by a particular process. [131089250060] |See this screenshot [131089250070] |How can this be achieved in Linux? [131089250080] |Thanks! [131089260010] |You can try top -H -p <pid>. [131089260020] |But please note that on some Unix flavors the '-H' option is not available in top. [131089270010] |htop, a curses version of top, has a display option for showing all the threads for each process in a tree view: [131089280010] |The classical tool top shows processes by default but can be told to show threads with the H key press or -H command line option. [131089280020] |There is also htop, which is similar to top but has scrolling and colors; it shows all threads by default (but this can be turned off). ps also has a few options to show threads, especially H and -L. [131089280030] |There are also GUI tools that can show information about threads, for example qps (a simple GUI wrapper around ps) or conky (a system monitor with lots of configuration options). [131089280040] |For each process, a lot of information is available in /proc/12345 where 12345 is the process ID. [131089280050] |Information on each thread is available in /proc/12345/task/67890 where 67890 is the kernel thread ID. [131089280060] |This is where ps, top and other tools get their information. [131089290010] |
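To make the /proc layout described above concrete, here is a small sketch (it inspects the current shell via $$; thread IDs will differ per system):

```shell
# Each directory under /proc/<pid>/task is one thread of that process.
pid=$$
ls /proc/$pid/task
# The "comm" file in each task directory holds that thread's name:
for tid in /proc/$pid/task/*; do
  printf '%s: %s\n' "${tid##*/}" "$(cat "$tid/comm")"
done
```

A single-threaded shell shows just one task directory; point pid at something like a browser process to see many.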

    Listing threads under Linux

    [131089290020] |

    Currently provided answers

    [131089290030] |Hello Lazer, I would like to make it clear that each answer here is providing you with exactly what you have specified: a list of all threads associated with a process. This may be unobvious in htop, as it by default lists all threads on the system, not just those of the process, but top -H -p <pid> works better, for example: [131089290040] |as a side note, the thread with -90 is actually a realtime thread [131089290050] |

    but

    [131089290060] |There's also another option which is a true CLI one: ps -e -T | grep [131089290070] |
  • -e shows all processes
  • [131089290080] |-T lists all threads
  • [131089290090] || pipes the output to the next command
  • [131089290100] |grep filters the contents
[131089290110] |here's an example: [131089290120] |each of these has the same PID, so you know they are in the same process. [131089290130] |Please don't hesitate to ask more questions. [131089300010] |How to view and change kernel memory size? [131089300020] |How can I view the kernel's share of memory on a machine? [131089300030] |How can I increase it? [131089300040] |What should I consider before doing it? [131089310010] |Memory for what? [131089310020] |You can adjust kernel parameters in /etc/sysctl.conf [131089310030] |try running [131089310040] |for some example kernel memory-related parameters [131089320010] |Where does GSettings store its files? [131089320020] |I would like to have a look at the files the dconf-editor uses to read/write settings, and I'm assuming that they are managed by gsettings. [131089320030] |Where are these files stored on the system, and in what format? [131089330010] |Following some advice, I made a change and ran this: [131089330020] |Among the displayed results was ~/.config/dconf/user. [131089330030] |It's a binary file, a sort of database where GSettings stores stuff. [131089330040] |(I should probably have used lsof before asking, considering it was the only tool I knew at the time that would help me find out.) [131089340010] |Installing Chrome on Linux without needing to be root [131089340020] |How can I install Chrome on Linux without needing to log in as root? [131089340030] |Note that I want to use Chrome, not Chromium. [131089340040] |If I go to the official download page, I get the choice between: [131089340050] |Can I somehow extract and install Chrome from the .deb or the .rpm without needing to be root? [131089340060] |Or is there another link that I missed? [131089350010] |I've successfully extracted the Fedora/OpenSUSE RPM into my home directory and ran chrome from there. [131089350020] |You simply need to make sure that the symlinks for the libraries are all there.
[131089350030] |This assumes that the libraries are already installed and that $HOME/bin is in my $PATH. [131089350040] |I just ran: [131089350050] |Now, if you don't have all those libraries installed already, or there are other dependencies for the chrome binary that are unmet, you might need to build and install them in your homedir. [131089350060] |Google Chrome helpfully adds ~/chrome/opt/google/chrome/lib to the $LD_LIBRARY_PATH, so you could install those additional dependencies there. [131089360010] |Is there a way to intercept interprocess communication in Unix/Linux? [131089360020] |For intercepting/analyzing network traffic, we have a utility called Wireshark. [131089360030] |Do we have a similar utility for intercepting all the interprocess communication between any two processes in Unix/Linux? [131089360040] |I have created some processes in memory and I need to profile how they communicate with each other. [131089370010] |This depends a lot on the communication mechanism. [131089370020] |
  • At the most transparent end of the spectrum, processes can communicate using internet sockets (i.e. IP). [131089370030] |Then wireshark or tcpdump can show all traffic by pointing it at the loopback interface.
  • [131089370040] |At an intermediate level, traffic on pipes and unix sockets can be observed with truss/strace/ltrace/..., the Swiss army chainsaw of system tracing. [131089370050] |This can slow down the processes significantly, however, so it may not be suitable for profiling.
  • [131089370060] |At the most opaque end of the spectrum, there's shared memory. [131089370070] |The basic operating principle of shared memory is that accesses are completely transparent in each involved process; you only need system calls to set up shared memory regions. [131089370080] |Tracing these memory accesses from the outside would be hard, especially if you need the observation not to perturb the timing. [131089370090] |You can try tools like the Linux trace toolkit (requires a kernel patch) and see if you can extract useful information; it's the kind of area where I'd expect Solaris to have a better tool (but I have no knowledge of it). [131089370100] |If you have the source, your best option may well be to add tracing statements to key library functions. [131089370110] |This may be achievable with LD_PRELOAD tricks even if you don't have the (whole) source, as long as you have enough understanding of the control flow of the part of the program that accesses the shared memory.
[131089380010] |This will show what a process reads and writes: [131089380020] |strace -ewrite -p $PID [131089380030] |It's not clean output (it shows lines like: write(#,) ), but it works! (and it's a single line :D ) You might also dislike the fact that arguments are abbreviated. [131089380040] |To control that, use the -s parameter, which sets the maximum length of strings displayed. [131089380050] |It catches all streams, so you might want to filter that somehow. [131089380060] |You can filter it: [131089380070] |strace -ewrite -p $PID 2>&1 | grep "write(1" [131089380080] |shows only descriptor 1 calls. [131089380090] |2>&1 is to redirect stderr to stdout, as strace writes to stderr by default. [131089390010] |RAMDISK incomplete write error kernel panic [131089390020] |I am building Linux kernel 2.6.36.4 on a Dell laptop which has Linux kernel 2.6.35.11 running. [131089390030] |BTW, I got the source from kernel.org. [131089390040] |The source had a few syntax errors, which I fixed in the process, and I finished building the kernel. [131089390050] |After reboot, I keep getting the following error: [131089390060] |RAMDISK: incomplete write error(6022 != 28860) write error Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown-block(0, 0) Pid: 1, comm: swapper Not tainted 2.6.36.4 #2 Call Trace: ? printk.... [131089390070] |I followed these steps while building the source: [131089390080] |
  • tar xvf linux-2.6.36.4.tar.bz2
  • [131089390090] |sudo cp /boot/config-2.6.35.11generic ~/linux-2.6.36.4/.config
  • [131089390100] |cd ~/linux-2.6.36.4
  • [131089390110] |make menuconfig
  • [131089390120] |sudo make
  • [131089390130] |sudo make modules_install
  • [131089390140] |sudo make install
  • [131089390150] |sudo update-initramfs -k 2.6.36.4 -c
  • [131089390160] |sudo update-grub
[131089390170] |I tried the following things after my internet search: [131089390180] |
  • After a reboot with the working kernel, ran sudo update-initramfs -u -k all
  • [131089390190] |Ran fsck
[131089390200] |However, I still get this error on every attempt to boot using 2.6.36.4. [131089390210] |Has anybody come across such an issue, and what do you suggest in this context? [131089390220] |Thank you in advance! [131089390230] |EDIT: Some developers have increased the ramdisk size to a few MB from the default of 4096. [131089390240] |Is that a good idea? [131089400010] |"The source had a few syntax errors": so it would not even compile (the lowest form of test). [131089400020] |If I understand correctly, then I would be highly surprised if it did not have problems. [131089400030] |There is probably no way you could have fixed all the bugs by fixing compilation errors; you would need a lot of knowledge of the code, and of what has changed (so you can focus in). [131089410010] |L2TP IPsec VPN client configuration [131089410020] |I have a Linux (Fedora) box and I want to connect to a VPN described as an "L2TP IPsec VPN". [131089410030] |I have got the following credentials: [131089410040] |user=xxxxxxxxx [131089410050] |pass=xxxxxxxxx [131089410060] |VPN server= XXX.XXX.XXX.XXX [131089410070] |IPsec key=xxxxxxxxx [131089410080] |I tried to use NetworkManager and vpnc with no luck. [131089410090] |What software should I use? [131089420010] |iptables log file [131089420020] |I use Ubuntu. I want to know: where is the log file of iptables? [131089420030] |I found /var/log/messages, but I am not sure whether that is the correct log file. [131089420040] |I also want to know when this log file is changed. [131089420050] |I added a rule to prevent my machine from responding to ping messages, but when I ping my machine I don't see any changes in /var/log/messages. [131089430010] |Because it can easily fill up your logs, the default is to not log. [131089430020] |Add a jump to the LOG target, which will log to the kernel log (which you can see with dmesg, or wherever syslog is configured to write it for your distro).
[131089430030] |In your LOG-target rule, you can set --log-level and --log-prefix to help organize the messages and keep them separate from other kernel messages. [131089430040] |LOG is a "non-terminating target", so rule traversal will continue on to the next rule — you can basically add logging right above your existing rules without affecting them. [131089440010] |iptables doesn't log unless you add a rule to create a log entry. [131089440020] |This is typically done with the -j LOG destination. [131089440030] |The log entry is sent to the kernel log, and your syslog daemon determines where kernel log entries go, which seems to be /var/log/messages in your case. [131089440040] |If you want to block pings from a certain host (a fictional 123.456.789.10 for example), and log all those packets, run: [131089450010] |Log out/in because of google-chrome-stable package update? [131089450020] |After I configured my Fedora 14 to do auto updates: [131089450030] |I get this funny message. [131089450040] |Why do I have to log out/in because the google-chrome-stable package has been updated? [131089460010] |I agree that it's a funny message. [131089460020] |You don't really have to log out and log in again, but you should make sure that the program and anything using any shared libraries are restarted. [131089460030] |I think the message was just chosen because logging out and in again is the simplest way to make sure that happens for users who don't know what they're doing. [131089460040] |(For some things, like system libraries, it'll tell you to reboot.) [131089470010] |PackageKit has the ability to notify when an Application, Session or the System needs to be restarted. [131089470020] |For some reason, the Google Chrome package is causing PackageKit to notify you that your login session needs to be restarted.
[131089470030] |I think that the way PackageKit manages this kind of thing assumes the worst-case scenario, so it assumes that the whole session needs to be restarted, not just the application. [131089470040] |It might be worth submitting a bug report to Google about it. [131089480010] |What are the kernel, the "GNU tools and utilities", the shell and the Window Manager? [131089480020] |Among the many components of a Linux system, I find myself confused (as a newbie) about what exactly the kernel is and what GNU's part is. [131089480030] |I understand some basic concepts of this, but where is the line between a shell and the Window Manager? [131089490010] |

    Kernel

    [131089490020] |The kernel manages resources. [131089490030] |Resources include processor time, memory, and peripherals. [131089490040] |It does this by directly communicating with the resources and exposing an interface to userspace. [131089490050] |
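One visible piece of that interface to userspace is /proc, a view the kernel synthesizes over the resources it manages — a quick sketch:

```shell
# /proc is a kernel-provided window onto the resources it manages:
cat /proc/uptime            # seconds since boot (processor time accounting)
head -n 3 /proc/meminfo     # memory accounting
ls /proc | head             # one numeric directory per managed process
```

None of these are real files on disk; the kernel generates the contents on each read.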

    Userspace Tools (sometimes includes GNU tools and utilities)

    [131089490060] |The Userspace Tools include basic utilities like ls, cat, dd, ln, mount, etc. [131089490070] |They allow a user to work with resources that the kernel provides. [131089490080] |Linux (as opposed to BSD, OSX, and other Unices) is the primary user of the GNU tools, but not even all Linux systems use these; an alternate set of tools for Linux is provided by Busybox. [131089490090] |

    Shell

    [131089490100] |The shell provides the environment that allows the user to use the Userspace Tools. [131089490110] |Example shells include bash, ksh, zsh, and fish. [131089490120] |They typically provide a prompt at which the user can enter commands that launch the userspace tools. [131089490130] |
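As a tiny illustration of that layer: each line typed at the prompt makes the shell locate a userspace tool, run it, and report its status.

```shell
# The shell resolves 'ls' via $PATH, forks, and execs it:
ls /tmp > /dev/null
# $? holds the exit status of the last launched tool:
echo "ls exited with status $?"
```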

    Window Manager

    [131089490140] |This is a much higher layer. There usually exists a display server, which is responsible for managing graphical, audio, and I/O resources, and for providing an interface to higher-level tools. [131089490150] |Usually, a display manager is the layer above the display server and can provide things like user login management and session management. [131089490160] |Above that is typically a window manager. [131089490170] |The window manager provides regions in which applications can render their content; it also allows the user to interact with these regions by moving, resizing, and reordering them. [131089500010] |If you're using 'bash' as your shell, that's a GNU utility. [131089500020] |The 'coreutils' package on your system contains GNU software, things like mv, ls, rm, etc. [131089500030] |The kernel is not something you interact with directly, but through other software on your system. [131089500040] |To be very general in definition, a kernel provides a means for software to interact with the hardware on your system: reading in your key presses and mouse movements, reading and writing data to your disks, and scheduling and performing the computation from software running on your computer. [131089500050] |There are a lot of details I'm ignoring/glossing over; you might benefit from reading the Kernel Wikipedia page. [131089500060] |Your window manager is most likely not GNU software, but from other software projects (Gnome, KDE, XFCE, etc.). [131089500070] |However, they all rely on GNU software to run, using the GNU C Library (glibc) and the GNU Compiler Collection (gcc) for example. [131089500080] |Also, much of the software on your system is licensed with the GNU General Public License, or the GPL, so you're benefiting from GNU's license. [131089510010] |How can I increase the open files limit for all processes? [131089510020] |I can use ulimit, but I think that only affects my shell session.
[131089510030] |I want the limit increased for all processes. [131089510040] |This is on Red Hat. [131089520010] |According to the article Linux Increase The Maximum Number Of Open Files / File Descriptors (FD), you can increase the open files limit by adding an entry to /etc/sysctl.conf. [131089520020] |Append a config directive as follows: [131089520030] |Then save and close the file. [131089520040] |Users need to log out and log back in again for the changes to take effect, or they can just type the following command: [131089520050] |You can also verify your settings with the command: [131089530010] |Justin's answer tells you how to raise the number of open files available in total to the whole system. [131089530020] |But I think you're asking how to raise the per-user limit, globally. [131089530030] |The answer to that is to add the following lines to /etc/security/limits.conf: [131089530040] |(Where the * means all users.) [131089530050] |There's some summary documentation in that file itself and in man limits.conf. [131089530060] |This is implemented via the pam_limits.so module, which is called for various services configured in /etc/pam.d/. [131089530070] |And, I have to admit, I have no idea where that 1024 default comes from. [131089530080] |And believe me, I looked. [131089530090] |I even tried without the pam_limits module configured, and it's still there. [131089530100] |It must be hard-coded in somewhere, but I'm not exactly sure where. [131089540010] |Windows Domain Server on FreeBSD [131089540020] |Is it possible (and if it is, could you point me in the proper direction) to set up a Windows Domain server (active directory and all that stuff) on a FreeBSD machine? [131089540030] |I do understand that I need to use Samba (but I'm not sure whether Samba 3 will be enough or whether I should go with the Samba 4 alpha versions). [131089540040] |As I don't know much about Windows domains, I'm not sure whether I need to set up a DNS service on the network.
[131089540050] |In the end, I need to set up (as a CS project) a virtual FreeBSD machine that will serve Windows domain/active directory services for virtual machines running Windows. [131089540060] |With all the nice stuff like single-login, settings, network directories and stuff. [131089540070] |I know it's a pretty general question, but after doing my research I was left pretty much confused - Samba 4 is advertised as a rewrite of Samba 3, with support for AD, being a primary domain controller, and many other features. [131089540080] |It sounds as if Samba 3 didn't have all that. [131089540090] |So, is it possible? [131089540100] |What should I use? [131089540110] |Any other tips for me? [131089550010] |Reset all groups to default [131089550020] |I recently set up some groups and users on my machine, and now I want to remove all those users and all those groups. [131089550030] |Somewhat stupidly, I read on an Ubuntu forum or something like that that it was possible to just edit the text file that contains all the group info. [131089550040] |But now, whenever I start a terminal, I get [131089550050] |which is obviously not The Right Thing. [131089550060] |So, how can I remove all the groups and users I created a few months ago? [131089560010] |It's true that you can just edit the text file /etc/group. [131089560020] |It sounds like you have either deleted too much, or else left the file in a broken state. [131089560030] |It needs to be one entry per line, in this format: [131089560040] |for "empty" groups, or [131089560050] |for groups with one member, or [131089560060] |for groups with multiple members. [131089560070] |I put "empty" in quotes because you can also be a member of a group if it's set as your primary group in /etc/passwd — it's the fourth field there. [131089560080] |(The "x" can actually be a password hash, but that's very rarely used these days. [131089560090] |It's almost certainly going to be all "x"s on your system.)
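The actual format lines referenced above ("for "empty" groups, or …") appear to have been lost in transcription; the three standard /etc/group cases look like this (group names and GIDs here are invented):

```
groupname:x:1003:
groupname:x:1003:member
groupname:x:1003:member1,member2,member3
```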
[131089560100] |If you have extraneous blank lines, or badly formatted ones, that could mess things up. [131089560110] |The file also needs to have world-readable permissions. [131089560120] |Ideally, you have a backup of the file from a few months ago. [131089560130] |But, I understand the real world. :) [131089560140] |There is also a file /etc/gshadow. [131089560150] |If you are using group passwords (and like I said, almost nobody does), their hash values should be kept here instead of /etc/group, because this file is supposed to be readable only by root. [131089560160] |This file can also be used to designate group administrators, who have the ability to add members using certain command-line tools. [131089560170] |(I don't know of anyone using this functionality either.) [131089560180] |This file should be kept in sync with /etc/group. [131089560190] |Using command-line or GUI tools to manage groups will do that automatically, which is another reason to consider using those rather than just editing the file. [131089560200] |However, in actual practice, this file can be quite out of sync and even all munged up, and your system will generally work fine. [131089560210] |It won't result in the error you're seeing. [131089560220] |Gilles adds in a comment below that you could look in the /etc/gshadow file for information about how things should be. [131089560230] |This won't have all the important information (like group ID numbers), but it will have the group names and probably group membership information. [131089560240] |There may also be an /etc/group- file (note the - at the end) — this is the standard backup file created by some tools for manipulating /etc/group. [131089560250] |If you're lucky, that'll have everything you need. [131089560260] |(Often it's not much help, though, because it's just one backup. [131089560270] |And programs like gpasswd don't update it.)
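A quick way to check for the formatting problems just described (blank or malformed lines, wrong permissions) is a sketch like this:

```shell
# Print any line of /etc/group that doesn't have exactly four colon-separated fields
awk -F: 'NF != 4 { printf "line %d looks malformed: %s\n", NR, $0 }' /etc/group
# The file must be world-readable (mode 644 or similar):
ls -l /etc/group
```

Note that a trailing empty member list (`audio:x:29:`) still counts as four fields, so valid "empty" groups are not flagged.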
[131089570010] |You can edit /etc/passwd and friends (/etc/group, /etc/shadow, /etc/gshadow) manually, but there's more risk of making a mistake (such as deleting a line you didn't mean to) than when using higher-level tools. [131089570020] |Use vigr rather than calling your editor directly on /etc/group (vigr -s for /etc/gshadow, vipw (-s) for /etc/passwd (/etc/shadow)). [131089570030] |This has two benefits. [131089570040] |First, these utilities lock the files to make sure that no one else and no other tool edits them at the same time. [131089570050] |Second, they create a backup (/etc/group-, …). [131089570060] |Check if you have /etc/group-, or perhaps a backup from another tool (e.g. /etc/group.bak). [131089570070] |It might be a little out of date, but it'll still be better than starting from scratch. [131089570080] |If you haven't modified /etc/gshadow, you can find the names of the missing groups, but not the numbers. [131089570090] |If you've erased entries below 100, their names and numbers are identical across all installations of a given version of Ubuntu (you'll still have to add local users to system groups as desired). [131089570100] |Entries between 101 and 999 are allocated when a package is installed, and I can't think of an easy way to reconstruct these. [131089570110] |Entries above 1000 are whatever you created. [131089570120] |I recommend setting up version control for /etc. [131089570130] |If you'd done this, you could just restore from a working version. [131089570140] |See Is there a system journal that I can install?. [131089570150] |In a nutshell: [131089570160] |Your changes will be saved (“committed”) once per day, before and after installing packages, or whenever you run bzr commit from /etc. [131089570170] |To see what you've changed, run bzr diff. [131089570180] |To restore the latest committed version, run bzr revert group from /etc.
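The setup commands after "In a nutshell" seem to have been lost in transcription, but the commit/diff/revert workflow itself can be rehearsed safely on a scratch directory. This sketch uses git rather than the bzr commands named above, since the commands are analogous (bzr commit / bzr diff / bzr revert):

```shell
# Practice the workflow on a scratch copy, not the real /etc
mkdir -p /tmp/etc-demo && cd /tmp/etc-demo
git init -q
echo 'users:x:1002:alice' > group
git add group
git -c user.email=demo@example.invalid -c user.name=demo commit -qm 'known-good group file'
echo 'oops, bad edit' > group    # simulate the mistake
git diff -- group                # like 'bzr diff': see what changed
git checkout -- group            # like 'bzr revert group': restore it
cat group                        # the known-good line is back
```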
[131089580010] |Is it as simple as this: the passwd file has your primary group as 1002, but you have removed this group from the group file? [131089580020] |So there is no way to resolve the name. [131089580030] |Also, files will still have the user and group ids that they had before. [131089580040] |It is not a good idea to remove users and groups; it is better to disable them. [131089580050] |If there is a reference left somewhere on the system, and you then add a new user/group that takes the old id, it can look like they created the file, give them access to personal information, etc. [131089590010] |Use this tag when asking how to do things that are typically considered administrative tasks requiring root privileges, when you don't have root privileges. [131089590020] |For example, installing software, accessing hardware, or sandboxing suspicious programs. [131089600010] |Doing administration tasks without administrative privileges [131089610010] |Spotlight/Search on Fedora [131089610020] |I apologize for the Mac OS X reference, but is there an equivalent of Spotlight search in Fedora, specifically FC14 x86_64?
[131089620050] |Namazu and Xapian are mainly focused on the backend, not the user experience. [131089620060] |Strigi also seems to be a dead project. [131089620070] |And Pinot looks promising but also appears to be one of those one-guy projects where the developer's continuing interest in the project may be a concern. [131089620080] |Edit: Missed a big one! [131089620090] |There's also Tracker: yum install tracker-search-tool. [131089620100] |This seems to be under active development as part of the Gnome project, which is very promising. [131089620110] |While it's designed to be desktop-environment-neutral, I'm not finding a current KDE front-end. [131089620120] |So, there you go. [131089620130] |Depending on how satisfying that is, the answer to your question is either "yes" or "no". :) [131089630010] |Is there a good command line tool for converting to and from FLAC audio format? [131089630020] |Ideally, I want to convert from MP3 to FLAC and back. [131089630030] |I also need to be able to script this. [131089640010] |It's called flac, oddly enough. [131089640020] |It's somewhat painful to use, or was back when I scripted a transcoding job with it. [131089650010] |sox version 13 and up supports FLAC, along with many other formats. sox can do many things to an audio file, not just convert from one format to another. [131089650020] |It is to audio what ImageMagick is to graphics. [131089660010] |The fundamental tool for sound format conversions and simple transformations is SoX, the Swiss Army knife of sound processing programs. [131089660020] |If you're running Debian, support for writing MP3 in sox is broken in lenny and squeeze (and as far as I know the same problem affects Ubuntu 10.04 and 10.10). [131089660030] |This bug was fixed in early March 2011, so grabbing the latest source and recompiling it (or grabbing a binary for sox 14.3.1-1build1 or newer) should work. [131089660040] |An alternative for encoding to MP3 is lame.
[131089660050] |It doesn't read flac, but you can use sox or flac to produce a wav file, then lame. [131089670010] |Enable killing X.org with a custom key combination [131089670020] |I've just read Enable kill X.org with ctrl+alt+backspace and am really happy that control-alt-backspace no longer kills my X server. [131089670030] |However, I'd like to have a way to kill it; it just should be something more complicated than what I type by accident ten times a day. [131089670040] |Is there a way to define my own binding for the kill? [131089670050] |I'd like to use control-alt-meta-shift-backspace or the like, and from the line [131089670060] |it's not obvious how to do it (unless it is "terminate:ctrl_alt_meta_shift_bksp", which doesn't seem to work). [131089680010] |XkbOptions refers to a rule defined in the XKB rules file, normally /usr/share/X11/xkb/rules/base, which will look like: [131089680020] |That in turn picks up the definition from the terminate symbols file, normally /usr/share/X11/xkb/symbols/terminate. [131089680030] |I'm not sure if you can just add more modifiers to the type="CTRL+ALT" line there or if there are limits on the type value. [131089680040] |Documentation on XKB customization can be found at http://www.x.org/wiki/XKB and may be able to help fill in some of the gaps in this answer. [131089690010] |Arch Linux: python and python2 are in conflict. [131089690020] |I've recently installed Arch Linux onto my primary (Ubuntu) computer, and it is working really well for me. [131089690030] |It's fast, configurable, basically a faster version of Ubuntu. [131089690040] |Since compiz-fusion isn't installed by default, I'd like to see how much it would impact my performance, but I get this really nice and descriptive error message when I run pacman to install it: [131089690050] |I haven't installed python3, and my only installed Python version is Python 2.7.1, which doesn't seem to be conflicting with anything.
[131089690060] |Google didn't turn up any results, so has anybody come across an error like this before? compiz-fusion isn't the only package which fails to install because of this python conflict, so quite a few nice packages (like python-qt) are uninstallable for me. [131089690070] |Any help is welcome. [131089690080] |Thanks! [131089700010] |On Arch Linux, the python package contains python 3, and the python2 package contains python 2. [131089700020] |Try pacman -Sy python python2 first. [131089700030] |Once both of those packages are installed, compiz-fusion and python-qt should install. [131089700040] |Oh, and you can't have updated your system in a while: dbus-python doesn't depend on the python package any more, but on the python2 package (since October 2010, according to SVN). [131089710010] |You should never install a package with pacman -Sy $package. [131089710020] |It will eventually break your system, eat your kitten or worse, but you apparently did so. [131089710030] |The move from python meaning python2 to python3 was made last year by the Arch Linux developers (news article). [131089720010] |Why does one Linux distro run hotter than another on a laptop? [131089720020] |I had been using Arch Linux 64 bit on a Gateway P6860FX for about two years, and recently switched to Ubuntu (also 64 bit). [131089720030] |When I type on the keyboard, my left hand feels a lot more warmth than before, and the air coming out of the exhaust port is definitely hotter. [131089720040] |(Odd, right now there's no extra heat at all... but anyway...) [131089720050] |Only minutes ago did I discover there are ways to monitor the CPU temperature. [131089720060] |I have no idea what it was for Arch, but on Ubuntu it's 60-something, rising to 88 when I run heavy number-crunching software for a few minutes. [131089720070] |There are good Q&As on this site and on Super User about cleaning out dust, and ways to help the computer stay cool.
[131089720080] |My question is: why would one Linux distro run hotter than another? [131089720090] |Is there some daemon running in one and not the other, or some device driver difference, or perhaps one but not the other sets the "run really hot" bit in the CPU's mode register, or what? [131089720100] |Can knowing this answer help me select the next distro to try? [131089720110] |Given several candidate distros that are all 64 bit and meet various requirements, can we predict which ones are going to make this machine run hot? [131089730010] |Assuming that both are using the same upstream kernel, start by checking the differences between the kernel configs for each. [131089730020] |In Debian (and I suppose Ubuntu), this will be found in "/boot/config-2.6.32-5-686-bigmem". [131089730030] |I would expect a newer kernel to be more likely to run cooler (latest-and-greatest mantra). [131089730040] |Are they both running a similar selection of packages? [131089730050] |Note that Ubuntu will by default install a heck of a lot more stuff, and some of these would be long-running applications (e.g. daemons), which may demand more CPU attention than normal applications. [131089730060] |Arch leans more toward minimalism. [131089740010] |Ubuntu also defaults to CPU-hungry eye candy (animated cursors and the like). [131089750010] |As geekosaur and Tshepang are saying: assuming that both distributions are using the same kernel, remaining differences should boil down to default configuration settings. [131089750020] |It could be worth exploring a bit before switching distributions (changing settings is presumably quicker than installing a new OS). I suggest: [131089750030] |
  • Check System > Preferences > Appearance > Visual Effects - you may prefer "none" to put less load on the CPU and graphics.
  • [131089750040] |Install and run PowerTOP, a Linux utility to help track down power consumption offenders. [131089750050] |(It's available from the Ubuntu software center.)
  • [131089750060] |There are a whole bunch of other settings that may affect power consumption, but PowerTOP will probably guide you to the ones that are most relevant.
[131089760010] |How to get files downloaded by yum? [131089760020] |I install a software package on Fedora from the terminal using a command like [131089760030] |# yum install live-usb-creator [131089760040] |At first it downloads some packages from the online repository, and then it installs those files onto the system. [131089760050] |Now I want a backup of those downloaded files so that in the future I can install the packages directly without connecting to the internet. [131089760060] |But the problem is that I don't know where it downloads all those files. [131089760070] |If it is possible to get those files: I have two systems, both with Fedora 13 i686 installed. [131089760080] |One has an internet connection and the other has none. [131089760090] |Now I want to install software using yum on the first system from the online repository; after that, can I install the same software on my second system from the files downloaded on the first system? [131089770010] |I googled and found this. [131089770020] |You can install the downloadonly plugin using yum install yum-downloadonly. [131089770030] |Then you can use the flag --downloadonly for the package concerned. [131089770040] |This probably isn't the exact solution to your question. [131089770050] |But I think it would be useful. [131089770060] |Also check out Q.14 on this. [131089770070] |It might help as well. [131089780010] |Set [131089780020] |in yum.conf. [131089780030] |Then future rpms should stay under /var/cache/yum. [131089790010] |(some) FTP clients timing out on data connection [131089790020] |I'm trying to connect to a server via FTP and I'm having some problems. [131089790030] |The server is CentOS with Pure-FTPd. [131089790040] |My usual client is lftp, which on this server gets stuck at "Making data connection". [131089790050] |All Google results about this suggest setting ftp:ssl-allow no, but that didn't help in my case.
[131089790060] |I also tried other clients and experienced the same behavior with ncftp and the graphical Gnome gftp: the connection is made but no data is transferred, even for a simple ls. [131089790070] |However, connection and data transfer work with the basic ftp client and from a Perl script using the Net::FTP module. [131089790080] |Any suggestion on what options I can try to get the other clients working? [131089800010] |Try using FTP in passive mode; the relevant setting in lftp is ftp:passive-mode. [131089810010] |Need help with re-partitioning [131089810020] |First of all: I'm just a beginning Linux user ;-) [131089810030] |I've set up a server with OpenVZ support. [131089810040] |I wrote a backup script that dumps the vz containers from time to time. [131089810050] |But some containers are not backed up due to insufficient disk space. [131089810060] |So I ran df -h, which gives me this: [131089810070] |So it seems the problem is the root partition (?), which has only 362M available. [131089810080] |As /srv has 174G, which is maybe too much, I want to "give" some disk space from there to root. [131089810090] |Can someone explain how I can do this, and maybe explain it a little bit? [131089810100] |I don't really know much about this mounting/partitioning on Linux (yet). [131089810110] |Thanks! [131089820010] |The simplest solution is to create a backup of /srv. [131089820020] |This is only 188MB; it should fit in /srv.tar. [131089820030] |Then delete the sda4 partition and create new ones in its place. [131089820040] |You can use cfdisk /dev/sda or any other partitioning software. [131089820050] |
  • sda4, an extended partition (you can only have 4 primary partitions otherwise)
  • [131089820060] |sda5 for /srv
  • [131089820070] |sda6 for the backups
[131089820080] |Create filesystems on sda5 and sda6, mount sda5 to /srv, and restore the backup. [131089820090] |Mount sda6 to the directory where you want the backups stored. [131089820100] |For example: [131089820110] |Don't forget to modify /etc/fstab. [131089820120] |Add the new backups filesystem and change the device for /srv. [131089820130] |In the future it would be a good idea to use LVM. [131089820140] |That makes this sort of problem easier. [131089830010] |How to RAID-mirror an existing root partition? [131089830020] |I'd like to mirror my existing root (and only) partition on an SSD to another disk. [131089830030] |It should be a sort of RAID-1, just asymmetric*. [131089830040] |I know there's the option mdadm --write-behind, which should do it. [131089830050] |But I have no idea if it is possible while preserving the contents of the existing partition. [131089830060] |I imagine it like [131089830070] |
  • create the "slave" partition
  • [131089830080] |set up the RAID, telling it that the slave partition is not initialized
  • [131089830090] |let it initialize it by cloning the master partition
[131089830100] |but I'm probably too optimistic, aren't I? [131089830110] |* All reads should access the first disk, and writes should be considered finished when the first disk is written. [131089840010] |I have an idea. [131089840020] |I tested this with small filesystems on loop devices; I recommend you do the same before trying it yourself. [131089840030] |In this answer /dev/sda is your disk with the important data and /dev/sdb is the new empty disk. [131089840040] |
  • Create a degraded RAID1 array from the empty disk. [131089840050] |This is important!
  • [131089840060] |Then shrink the filesystem on the disk you want to mirror. [131089840070] |(Hopefully it's supported.) [131089840080] |This is needed because the RAID arrays have a header and the full filesystem won't fit on the array.
  • [131089840090] |Copy the data to the new degraded array.
  • [131089840100] |Add the original disk to the array.
  • [131089840110] |You can watch the synchronization progress.
[131089850010] |You can create an mdraid RAID-1 array starting with an existing partition. [131089850020] |First, you need to make room for the mdadm superblock, which means you need to shrink your filesystem a little. [131089850030] |At the moment, the normal superblock format is 0.9. [131089850040] |Its location is between 128kB and 64kB from the end of the partition, it is 4kB long, and it starts on an address that is a multiple of 64kB. [131089850050] |So shrink your filesystem by 128kB, or more precisely to ((device_size div 64kB) - 1) * 64kB. [131089850060] |If you want more than 2TB per stripe, you need the 1.0 superblock format, which isn't supported out of the box by all distributions yet. [131089850070] |The 1.0 superblock is at the end of the device, which I understand to mean that you only need to shrink your filesystem by 8kB. [131089850080] |Now that you've shrunk the filesystem, you can create the array. [131089850090] |First create a degraded array with just the existing data. [131089850100] |Make sure the filesystem isn't mounted at this point. [131089850110] |For your use case the write-intent bitmap must be on a separate partition. [131089850120] |Use -e 1.0 to use the newer version-1 superblock format. [131089850130] |Now you can mount the filesystem on /dev/md0. [131089850140] |Add the second disk at your leisure. [131089850150] |The data will be copied to the new drive in the background. [131089850160] |I've created a mirrored array like this, but without write-behind mode. [131089850170] |I don't think write-behind mode would invalidate the procedure. [131089860010] |Full-text search for man pages [131089860020] |apropos works great for searching manual page names and descriptions. [131089860030] |Is there a similar command for searching the entire contents of the manual pages? [131089870010] |By using the command man man, we can see that we have two options.
[131089870020] |This is on a RHEL 5 system. [131089880010] |Two options for you. First, you can try this script: [131089880020] |Save it as searchman.sh or some such, optionally make it executable, and stick it somewhere in your $PATH. [131089880030] |Then just run sh searchman.sh. (Note: I've just thrown this together quickly; [131089880040] |I've tested it and it looks to be all good, but it might need tweaking here and there.) [131089880050] |Secondly, and especially if you're using Ubuntu, you can use http://manpages.ubuntu.com/ - there are a number of full-text search options available there. [131089890010] |Set default nice value for a given user (limits.conf) [131089890020] |Could someone tell me how to set the default value of nice (as displayed by top) for a user? [131089890030] |I have found that /etc/security/limits.conf is the place, but if I put either: [131089890040] |It doesn't work (while it should, right?). [131089890050] |Note that I've rebooted since then. [131089890060] |Thank you very much in advance for any help. [131089890070] |I'm using Debian unstable (up to date). [131089890080] |Context: [131089890090] |At my work, we have a local network: everyone has their own computer, and everyone can create an account on someone else's machine if they like. [131089890100] |The rule of thumb is simply that if you work on someone else's computer, please nice your processes (nice 19). [131089890110] |I would like to set the default nice value for a given user to 19 once and for all (he claims he always forgets to nice his processes).
[131089900040] |See http://and.sourceforge.net/. [131089900050] |This is available from Fedora with yum install and, but unfortunately doesn't seem to be in EPEL. [131089900060] |And it's in Debian too: apt-get install and. [131089900070] |If you are using a modern distribution, though, there's an Even Better Way. [131089900080] |You can use the tools from libcgroup to set up a kernel-level cgroup limiting CPU shares, and to automatically "classify" that user's processes into this cgroup. [131089900090] |With this, you can also prioritize I/O, and limit memory usage (including share of the disk cache). [131089910010] |I believe the correct format is: [131089910020] |This is an example of the settings I am using in production (obviously with real users/groups). [131089910030] |The nice setting is to determine the maximum nice value someone can set their process to, not their default priority. [131089920010] |Horizontal file concatenation [131089920020] |Is there a Linux command like cat that joins files with the same number of lines horizontally? [131089930010] |join should do the trick - You just need to prefix the lines with an identical ID. [131089940010] |paste may do the trick. [131089940020] |At least some of the time, you don't need to have a "key" to concatenate the lines. [131089950010] |How to change mount points [131089950020] |Hi, [131089950030] |I'm not very deep into this mounting/unmouting think on Linux, so here goes my question: [131089950040] |With df -h I get the following overview: [131089950050] |I'm using this machine as web server where all web related stuff resides under /srv/. [131089950060] |As this is part of / I'm out of disk space here. [131089950070] |I saw /home having 44G available web space, which is pure nonsense in my case. [131089950080] |So I want to have /home not as own partition (rather part of /), but /srv as own partition, grabbing the space consumed by /home. 
[131089950090] |So after that, df -h should look like this (/home replaced by /srv): [131089950100] |What do I have to do to get there? [131089960010] |Before you do anything you're going to have to figure out a place to keep the 180 megabytes of data that /home is currently taking up. [131089960020] |I'd recommend repartitioning the current /dev/sda9 into, say, two gigs for /home and 42 for /srv. [131089960030] |Next up you're going to have to be a little tricky. [131089960040] |This is all best accomplished in single-user mode so that only root is logged on and you don't run into trouble with someone trying to access /home while you're moving it around. [131089960050] |You've got a decent amount of room in /var, so we'll use that as a temporary holding space: mkdir /var/tmp/oldhome [131089960060] |cd /home [131089960070] |tar -cvf - ./ | ( cd /var/tmp/oldhome && tar -xvf - ) [131089960080] |Now we've got /home backed up someplace while we repartition /dev/sda9 into 2 gigs for /dev/sda9 and 42 gigs for /dev/sda10. [131089960090] |Once you've finished repartitioning and creating new filesystems (I'm going to assume you know how to do this) you'll need to edit /etc/fstab. [131089960100] |Somewhere in there you'll see a line saying something along the lines of [131089960110] |/dev/sda9 /home ext3 defaults 0 2 [131089960120] |Assuming that you've made /dev/sda9 the smaller of the two partitions, you can leave that line unchanged; you'll just need to add [131089960130] |/dev/sda10 /srv ext3 defaults 0 2 [131089960140] |directly underneath. [131089960150] |Once those lines have been added, simply enter [131089960160] |mount /home ; mount /srv [131089960170] |and check with df -h to make sure both partitions are mounted. [131089960180] |Then restore the data to /home: [131089960190] |cd /var/tmp/oldhome [131089960200] |tar -cvf - ./ | ( cd /home && tar -xvf - ) [131089960210] |Reboot your system in multi-user mode and everything should work.
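The tar-over-a-pipe copy used in the answer above can be tried safely on scratch directories first; this sketch mirrors the /home-to-/var/tmp/oldhome step with temporary paths (the real procedure runs as root in single-user mode):

```shell
# Same pipeline shape as: tar -cvf - ./ | ( cd /var/tmp/oldhome && tar -xvf - )
# but demonstrated on throwaway directories instead of /home.
src=$(mktemp -d)   # stand-in for /home
dst=$(mktemp -d)   # stand-in for /var/tmp/oldhome
echo 'user data' > "$src/file"
( cd "$src" && tar -cf - ./ ) | ( cd "$dst" && tar -xf - )
ls "$dst"
```

tar preserves permissions and ownership across the pipe, which is why the answer uses it rather than a plain cp for /home.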
[131089970010] |As a quick but not very beautiful solution, you could remount a directory from one of your less-used disks to some point under /srv and move something there to clear a bit of space on /srv proper. [131089970020] |Read about --bind in man mount. [131089970030] |It boils down to something like mount --bind /some/spare/dir /busy/dir/mountpoint. [131089970040] |It works on any modern Linux. [131089970050] |Suppose that you have /srv/some/stuff. [131089970060] |
  • mkdir /home/offload/some/stuff — this is on the 44G free space partition
  • [131089970070] |mv /srv/some/stuff /srv/some/previous-stuff — temporarily, free up the name
  • [131089970080] |mount --bind /home/offload/some/stuff /srv/some/stuff — now some/stuff is on another partition!
  • [131089970090] |mv /srv/some/previous-stuff/* /srv/some/stuff — put things back under the original name, free up the space on /srv.
[131089980010] |Since you have plenty of room in /home, move all the stuff from /srv into /home, then (optionally) move the stuff that was in /home to the root partition. [131089980020] |The simplest solution, if you don't mind a few minutes' downtime, is to move /srv into the larger partition and symlink it: [131089980030] |If you really want to move /home to the root partition, then it takes a few renames. [131089980040] |I assume there's no directory called /home/srv or /srv/srv. [131089980050] |Finally (if you're not using the symbolic link method) edit /etc/fstab to change the mount point: on the line that begins with /dev/sda9 /home, replace /home by /srv. [131089990010] |best bittorrent client on Linux [131089990020] |What would people recommend as a free (as in freedom) BitTorrent client on Linux? [131089990030] |Wikipedia has a Comparison of BitTorrent clients. [131089990040] |For the record, I'm currently using QBittorrent, which I'm quite satisfied with, as of recent versions, which are nicely featured and stable. [131089990050] |I'm currently running 2.6.7 on Debian Squeeze. [131089990060] |The version on lenny (1.1) was a bit dodgy, but the project was quite young then. [131089990070] |However, it does not hurt to learn what else is out there. [131089990080] |I'm quite partial to Python software, if any Python BitTorrent clients exist. [131089990090] |QBittorrent is written in C++. [131089990100] |Many of the most popular BitTorrent clients are proprietary, judging by the Wikipedia page. [131089990110] |EDIT: Thanks for all the recommendations. [131089990120] |If the recommenders would like to explain why they like their preferred clients, I'd be happy to hear it. [131089990130] |I know very little about what makes a bittorrent client good and would be glad to be educated.
[131089990140] |Also, it would be nice if something could be said about Linux distribution support and possibly other OSes like FreeBSD and OS X, though I only use Debian and am unlikely to change. [131090000010] |I've used ktorrent, and been quite happy with it. [131090010010] |For me it is Transmission: simple, light and fast. [131090010020] |But if you want a more friendly UI, you can check out Vuze. [131090020010] |I'd personally concur with @sahid's recommendation of Transmission, but if your heart is absolutely set on a Python application then Deluge is more than worth a try. [131090030010] |rTorrent is quite good as well. [131090030020] |It's a CLI-based client. [131090030030] |But it has great features. [131090040010] |For the next generation of decentralized torrent software, check out Tribler. [131090040020] |Oh, and yes, it's Python :) [131090040030] |+1 for rTorrent and Transmission, too, depending on your use case. [131090050010] |QBittorrent is the only one I know of which offers a built-in tracker, i.e. if you want to personally use BitTorrent to share a file amongst a known group and not make it a public torrent, you can become the tracker. [131090050020] |And of course, it is a normal BitTorrent client also. [131090050030] |The built-in tracker is simply a secondary feature. [131090060010] |For spot usage I've found Deluge very good. [131090060020] |If you plan to have a remote machine, I'd suggest Azureus (Vuze) with the HTTP remote interface. [131090060030] |Very simple, intuitive, stable, and gets the job done. [131090070010] |Please recommend a GUI telnet client [131090070020] |I'm looking for a simple GUI telnet client. [131090070030] |I only ever used the CLI one, simply named telnet. [131090080010] |I'm baffled as to why you'd need one, but PuTTY comes with a Linux client. [131090080020] |It's open source, exists in the Debian repositories, and as an added bonus speaks SSH as well.
[131090090010] |Rolling back a file [131090090020] |I have a situation: a colleague of mine overwrote PHP files that I've made changes to. Is it possible to roll back a file to a previous working version using the command line? [131090090030] |No svn repositories are available, and there are no backups either. [131090100010] |I'm sorry to say that if you didn't make any backups, you're almost certainly SOL here, especially from the command line. [131090100020] |Unlinking (deleting) a file can sometimes leave the data recoverable as long as nothing else grabs that particular inode; editing a file overwrites the data. [131090100030] |If your colleague still has the editor he was working in open and its undo buffer is long enough, that might be a way to recover the original, but beyond that I'm afraid you're sunk. [131090100040] |Sorry. [131090110010] |Assuming you're using ext3, it might be possible to recover the file if the replacement was created as another inode (instead of overwriting the existing file), by using debugfs on the unmounted filesystem to find the inode of the original file. [131090110020] |Unfortunately, if your colleague overwrote the file, rather than moving it aside and then deleting it, it's gone. [131090110030] |I would suggest using debugfs with extreme caution, because you can seriously mess up a filesystem. [131090110040] |Its use is really only as a last-ditch effort. [131090120010] |Usually you would have a tmp file created whose name begins with ~filename.extension. [131090120020] |You can recover from that. [131090120030] |Hope this helps. [131090130010] |Display command in xterm titlebar [131090130020] |My bash prompt is currently setting the xterm titlebar using the following sequence: [131090130030] |Is there an easy way to display the current command in the titlebar? [131090130040] |For example, if I am tailing a file using tail -f foo.log, I want my titlebar to say tail -f foo.log.
[131090140010] |(Inspired by this SU answer) [131090140020] |You can combine a couple bash tricks: [131090140030] |
  • If you trap a DEBUG signal, the handler is called before each command is executed
  • [131090140040] |The variable $BASH_COMMAND holds the currently executing command
[131090140050] |So, trap DEBUG and have the handler set the title to $BASH_COMMAND: [131090140060] |This will keep the title changed until something else changes it, but as long as your $PS1 stays the same it won't be a problem -- you start a command, the DEBUG handler changes the titlebar, and when the command finishes bash draws a new prompt and resets your titlebar again. [131090140070] |A useful tip found here (also where that SU answer came from) is to include: [131090140080] |This will make bash propagate the DEBUG trap to any subshells you start; otherwise the titlebar won't be changed in them. [131090150010] |basically, you need: [131090150020] |at the end of your .bashrc or similar. [131090150030] |Took me a while to work this out -- see my answer here for more information :) [131090160010] |Kill child-parent processes in a single command [131090160020] |Hello, [131090160030] |I connect to the Internet using sudo wvdial on Fedora 14. [131090160040] |The terminal needs to be kept working. [131090160050] |My requirement is to run yum update in a separate terminal, then kill wvdial and its parent terminal and do init 0 in a single command using su -c. [131090160060] |Is there a way to kill the child (here, sudo wvdial) and the parent (here, the terminal running wvdial) with a single command which can let me do the following? [131090160070] |Here kill-child-parent-processes signifies the method by which I can kill sudo wvdial and its parent terminal. [131090160080] |Thanks. [131090170010] |You need to find out the session ID (sid) of the shell running in the terminal. [131090170020] |(Pedantry alert: usually this is the same as $$. [131090170030] |If it's different then this may not work.) [131090170040] |You can then use this to kill the session running in the terminal. [131090170050] |You can't kill the terminal directly this way (it's in the window manager's session), but if the terminal is set to auto-close (as it usually is) then it will go away by itself.
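To make the session-ID approach above concrete, here is a sketch. The answer doesn't give exact commands, so the ps and pkill invocations below are my assumption (both are standard procps tools); the destructive step is left commented out.

```shell
# Find the session ID (sid) of the current shell, per the answer above.
sid=$(ps -o sid= -p "$$" | tr -d ' ')
echo "shell pid $$ is in session $sid"
# Killing the whole session (shell, wvdial, and friends) would then be:
#   pkill -s "$sid"   # destructive: do not run in a terminal you care about
```

pkill -s matches processes by session ID, so one command takes out both the child and its parent shell.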
[131090180010] |How can I compile unclutter for my embedded Linux? (newbie) [131090180020] |I got the source code of unclutter using apt-get source unclutter, and copied the files to my embedded system. [131090180030] |Now, how can I compile it? [131090180040] |--update [131090180050] |I've tried this answer: How to compile and install programs from source, but it doesn't work here: there's no "./configure", and make was not found. [131090190010] |In order to compile things on that system, it needs to have make, gcc, and a whole lot of other stuff that's not usually found on embedded devices. [131090190020] |Typically, you cross-compile it on another machine and then put the binary on the embedded system. [131090190030] |You may be lucky enough not to have to compile it. [131090190040] |You can get the binary for your architecture and try running it on the system. [131090190050] |Cross-compiling is a large topic, and there are lots of tools out there that try to make it easier. [131090190060] |Some things to search for: Linaro, Buildroot, crosstool. [131090190070] |To get the binary, go to packages.debian.org, search for the package that has the binary, download the appropriate one for your architecture (such as arm), open it with an archive manager, and look at the "data" folder - this will have the binaries. [131090190080] |It may turn out that the binary needs libraries that are also not installed - you can follow the same process: find the package with the library you need, copy it over to the target system, and try again. [131090200010] |I'm not sitting at a Debian-based box just now, but I think the answer is to use apt-build; however, if your system is small, you may not have it by default, and might not even be able to fit all the bits it depends upon. [131090210010] |Crontab syntax: using '*' for the minutes value [131090210020] |What happens if I use '*' for the minutes value? [131090210030] |Is the command going to be run every minute?
[131090210040] |For example: * 4 * * 0 [131090220010] |Yes, in your example, the command will run every minute of the 0400 hour on every Sunday. [131090220020] |By the way, if you do need something to run every minute, it's likely that you're monitoring something for changes; there is typically a better way of doing this. [131090220030] |For instance, on Linux there is inotify for waking programs based on filesystem events, and ip monitor for watching network status changes. [131090230010] |Yes, this job will run every minute. [131090230020] |Here are some relevant sections from man 5 crontab: [131090230030] |The manpage says that if you use the asterisk (*) in the minute field, this is equivalent to using "0-59" (``first-last''), and the job will run every minute in the hour. [131090240010] |Yes, this will run every minute of hour 4 (04:00 to 04:59) on Sunday. [131090240020] |Likely what is intended is to run once during that period. Unless it needs to run on the hour, pick a random value from 1 to 59, and replace the minutes value. [131090240030] |If you have a bunch of programs that need to run in a particular period, you can limit load peaks by using a random minute in the hour you want the task to run. [131090240040] |Use a different value for each crontab entry. [131090250010] |How do I reuse the last output from the command line? [131090250020] |This is a noob question, but I'd like to know how to reuse the last output from the console, i.e.: [131090260010] |Assuming bash: [131090270010] |Try this: [131090280010] |All the other solutions involve modifying your workflow or running the command twice, which might not be suitable if it takes a long time to run, or is not repeatable (e.g. it deletes a file - rerunning it would produce a different result). [131090280020] |So here's a more complicated idea if you need it: [131090280030] |.bashrc [131090280040] |bash prompt [131090280050] |This has some issues, so it's just meant as a starting point.
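The idea, roughly, is to mirror everything the terminal shows into a log file via tee (a sketch assuming bash; OUTLOG is an illustrative name -- in the .bashrc version the exec line goes in your ~/.bashrc, guarded by a marker variable so it only runs once per terminal):

```shell
# Everything printed inside the wrapped shell is both shown and appended to
# a log file via tee, so the previous output can be fished out later,
# e.g. with $(tail -n 1 "$OUTLOG").
OUTLOG=$(mktemp)
bash -c "exec > >(tee -a '$OUTLOG') 2>&1
         echo 'this line is both shown and saved'"
sleep 1                      # give tee a moment to flush
tail -n 1 "$OUTLOG"
```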
[131090280060] |For example, the output file (~/.out) might grow very large and fill up your disk. [131090280070] |Also, your whole shell could stop working if tee dies. [131090280080] |It could be modified to only capture the output from the previous command using preexec and precmd hooks in zsh, or an emulation of them in bash, but that's more complicated to describe here. [131090290010] |So, uh, here's an answer: [131090290020] |If you're running under X, select the output you want with the mouse to copy it, and then middle-click to paste it. [131090290030] |If you're running on a text console, you can do a similar thing with gpm. [131090300010] |Not yet mentioned, use a variable: [131090310010] |$ cd `python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"` [131090310020] |will do the trick. [131090310030] |Read here for more details: Command substitution. [131090320010] |If you realize you're going to want to reuse the output before you hit Enter, you can save it in a variable: add tmp=$( at the beginning of the line and ) at the end. [131090320020] |(This removes any blank line at the end of the command output, and in fact removes any final newline; this rarely matters.) [131090320030] |If your shell is ksh or zsh, here's a useful function you can use to make this more automatic. [131090320040] |(It's no help in bash because it requires the last command in a pipeline to be executed in the parent shell, which is only the case in ksh (not pdksh) and zsh.) [131090320050] |Use it this way: [131090330010] |(It's not a working answer, unfortunately, but still something curious. [131090330020] |Someone interested could well try to complete the implementation of the feature I'm going to tell you about.) [131090330030] |In eshell inside Emacs, they wanted to have such a feature but it's not implemented in a complete way (which is however reflected in the documentation). 
[131090330040] |For example: [131090330050] |You see, only the output of builtins can be captured into the $$ variable. [131090330060] |But well, some elisp programming (cf. eshell-mark-output implementation in "esh-mode.el"), and you could implement a function that "marks" the last output and returns it as the function's result; so that you can use that function in an eshell command you are asking for -- elisp functions can be used in eshell commands with the usual elisp syntax, i.e. in parentheses, like this: [131090340010] |A working draft for a traditional shell: [131090340020] |Now we can cat the screen to a file. [131090340030] |Needs sudo. [131090340040] |Apropos screendump: the program so named doesn't work for me any more. [131090340050] |Maybe for older kernels only. /dev/pts/N didn't work for me either. [131090340060] |Maybe you have to do some optional MKDEV in /dev - I dimly remember something about /dev/cuaN, but I may be wrong. [131090340070] |We would like to pipe the output instead of using screen.dump. [131090340080] |But somehow it doesn't work - sometimes it waits for ENTER. [131090340090] |The capture isn't a normal text file with linefeeds, but - for example - 80x50 chars in one sequence. [131090340100] |To pick the last 2 lines, 1 for the output of the command, and one for the prompting line, I reverse it, pick 160 chars, reverse again and pick 80. [131090340110] |Just in case you ever wondered why there is a rev program. [131090340120] |Critique: [131090340130] |
  • The first commands are entered, thus moving the line ahead. [131090340140] |Well - just a numerical exercise to pick the 3rd-last line or something. [131090340150] |I worked mainly in a different window.
  • [131090340160] |Not everybody has an 80x50 screen. [131090340170] |Well, yes, we know. [131090340180] |There is $COLUMNS and $LINES for your pleasure.
  • [131090340190] |The output is not always at the bottom. [131090340200] |A fresh and young shell might be in the upper rows. [131090340210] |Well - simple as that: Evaluate what shell is running. [131090340220] |Which prompt is used. [131090340230] |Do some prompt detection and find the last line with a shell-prompt. [131090340240] |The line before (or the second before) should contain the directory.
[131090340250] |The first diagram is made with explain.py. [131090350010] |(building up on 4485's answer) [131090350020] |That's lots of typing, so make an alias: [131090350030] |Then simply call cd $(python -c ... | tee2tty) [131090350040] |This of course requires you to already know what you want to do with the output but has the advantage of calling the command only once. [131090360010] |How to wget a file to a remote machine over SSH? [131090360020] |I'd like to basically pipe a wget command to a file on a remote server over SSH. [131090360030] |How can I do this? [131090360040] |I know I could simply ssh into the server and have it download the file, but I'd much rather use the local machine to download it and send it. [131090370010] |If I understand your question correctly (it's not clear what machine is supposed to do what), you are logged into some machine myclient, you have ssh access to another machine myserver, and you want to download a file over HTTP from a remote server www.example.com, with the requirement that the HTTP download must be performed by myclient but the data needs to be saved on myserver. [131090370020] |Then: [131090370030] |Another approach is to mount the remote server's filesystem over SSH, with sshfs. [131090370040] |This is too much hassle for a once-off need, but convenient if you do that sort of thing often. [131090380010] |Linux kernel support for USB gamepads? [131090380020] |I've got an old Logitech USB gamepad that worked well under both Windows and Mac OS X. That is, the gamepad was totally "plug and play" for games run by a Super Nintendo emulator (SNES9X). [131090380030] |Does Linux support such gamepads out of the box? [131090380040] |Or any other controller for that matter? [131090380050] |Thanks. [131090390010] |Yes - they're recognised as an input device and you should be able to see information about it with "lsusb". [131090400010] |Compare files that are in directory 1 but not directory 2?
[131090400020] |I'm having trouble with a bash script I want to make. [131090400030] |I know ls will list files that are in a directory but I want it to list directories that are in directory1 but NOT in directory2, and then list files in directory2 that are NOT in directory1. [131090400040] |In a feeble attempt, I tried: [131090400050] |Quickly I realized why it didn't work. [131090400060] |Can anyone help a total bash nerd out? [131090400070] |Thanks! [131090410010] |If you prefer a graphical tool, use [131090410020] |you might need to install it first. [131090410030] |Similar programs are [131090410040] |The midnight commander has a compare directories command built in, which works nicely, if you don't go for subdirs. [131090420010] |And here is a pure script. [131090420020] |These are the directories a and b: [131090420030] |Here is the command: [131090420040] |output: [131090420050] |Swap a and b for files in a but not in b. The ! is a negation. -e tests for existence. [131090420060] |In prose: "Test if the file found in a does not exist in ../b". [131090420070] |Note: You have to dive into a first, to get names without 'a'. [131090420080] |For the second comparison you have to cd ../b. [131090430010] |Given bash, this might be easiest as [131090430020] |The <(command) expression runs command over a pipe and substitutes a /dev/fd reference: [131090430030] |So the command above runs ls -a on each directory and feeds their outputs as file arguments to comm, which outputs up to 3 columns, tab-indented: entries only in the first, entries only in the second, entries in both. [131090430040] |(That is, if it's only in the second it is indented by a tab, if it's in both it's indented by 2 tabs.) [131090430050] |You can also suppress columns by number: comm -1 foo bar displays only the lines in the second file and the lines in both, the latter indented by one tab.
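A toy run of the comm approach (directory and file names are placeholders):

```shell
# Two throwaway directories with one unique file each and one shared name.
cd "$(mktemp -d)"
mkdir dir1 dir2
touch dir1/common dir1/only-in-1 dir2/common dir2/only-in-2
comm <(ls dir1) <(ls dir2)
# "only-in-1" lands in column 1 (no indent, unique to dir1),
# "only-in-2" in column 2 (one tab, unique to dir2),
# "common"    in column 3 (two tabs, present in both).
comm -23 <(ls dir1) <(ls dir2)    # prints just "only-in-1"
```

Note that comm expects sorted input, which plain ls already provides.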
(This is most commonly used by suppressing all but the column you want: comm -13 foo bar shows only the lines in common.) [131090430070] |Given that you want those in the first directory, that translates to [131090430080] |If you need more than just whether it is present, use diff -r, which will output diffs for the files in common and a one-line message for files found only in one or the other. [131090440010] |You could use the neglected join command. [131090440020] |Here is some setup for two example directories, d1/ and d2/, each of which has some files with names unique to the directory, and some files with names in common with the other directory. [131090440030] |This is only an example, so I used single-letter file names to illustrate file names unique to one or the other, and file names in common. [131090440040] |For me, it shows files like this: [131090440050] |UPDATE: You'd want to do different things in real life to accommodate files with whitespace in them, as join uses the first field "delimited by whitespace" to decide which lines are unique and which lines are common. [131090450010] |You can use find and awk to solve this. [131090450020] |With the following layout: [131090450030] |Part one: [131090450040] |Part two: [131090450050] |This compares to a comm solution: [131090450060] |And to a join solution: [131090460010] |USB storage devices aren't automatically mounted when inserted on a fresh install of Debian 6.0. [131090460020] |I installed Debian 6.0 (squeeze) a few days ago on my machine. [131090460030] |I installed the default GNOME desktop, standard settings. [131090460040] |Unfortunately, I just noticed that when I plug in USB storage devices (external hard drives, USB sticks, etc.), they don't get automatically mounted, like they used to (and presumably still should). [131090460050] |I noticed that the usb-storage module wasn't loaded automatically, either, so no device nodes were getting created either.
So, I loaded that module, so at least now the device nodes get created automatically, it's just a case of mounting them manually. [131090460070] |But that's not the point! [131090460080] |In nautilus's preferences, I have "Browse media when inserted" checked (i.e., the default), but just nothing in the UI happens when I insert something. [131090460090] |The device never appears in the Computer view. [131090460100] |Watching the kernel logs shows that the insertions are definitely being registered, and after manually loading usb-storage first (what is that about? Why isn't that happening automatically?), device nodes get created, but that's it. [131090460110] |So. [131090460120] |My question is, from here, how do I go about finding out what's wrong? [131090470010] |Run gconf-editor and check the mark against "/desktop/gnome/volume_manager/automount_drives". [131090470020] |That should do the job. [131090480010] |I've fixed this now. [131090480020] |Firstly, I didn't install squeeze with the default GNOME desktop environment. [131090480030] |I mis-remembered. [131090480040] |What I did was install the base system, and then install the gnome-desktop-environment package, aiming for a lighter/closer-to-upstream set of packages. [131090480050] |Now, completely unscientifically, I did two things in one go to fix this, so I don't know for sure which did it. [131090480060] |I decided to install Debian's GNOME desktop task (tasksel install gnome-desktop --new-install). [131090480070] |I thought that maybe, somehow, gnome-desktop-environment didn't pull in the right packages to enable this functionality. [131090480080] |In retrospect, I think that's unlikely. [131090480090] |Next, I restarted the machine. [131090480100] |I was sure I had restarted the machine after installing gnome-desktop-environment, but now I think I hadn't. [131090480110] |So presumably, some services weren't working properly, or something like that.
[131090480120] |After restarting, things worked exactly as I wanted - USB storage devices were being mounted automatically, at /media/, which is great. [131090490010] |How can I get an internship? [131090490020] |I am a master's student in computer science. [131090490030] |I will graduate in May. [131090490040] |It has been exhausting for me trying to get a software engineering job. [131090490050] |I want to start with an internship. [131090490060] |How can I get an internship? [131090490070] |Could any kind person refer me to a software engineering internship? [131090490080] |Let me try an interview. [131090490090] |Part of my CV: [131090490100] |Proficient in C++, C, and Java. [131090490110] |All help is really appreciated! [131090490120] |Please check my profile for my email. [131090490130] |I can send you my CV privately. [131090490140] |Luke. [131090500010] |What is the benefit of compiling your own Linux kernel? [131090500020] |What benefit could I see by compiling a Linux kernel myself? [131090500030] |Is there some efficiency you could create by customizing it to your hardware? [131090510010] |Compiling the kernel yourself allows you to only include the parts relevant to your computer, which makes it smaller and potentially faster, especially at boot time. [131090510020] |Generic kernels need to include support for as much hardware as possible; at boot time they detect which hardware is attached to your computer and load the appropriate modules, but it takes time to do all that and they need to load dynamic modules, rather than having the code baked directly into the kernel. [131090510030] |There's no reason to have your kernel support 400 different CPUs when there's only one in your computer, or to support bluetooth mice if you don't have one; it's all wasted space you can free up. [131090520010] |In my mind, the only benefit you really get from compiling your own Linux kernel is: [131090520020] |You learn how to compile your own Linux kernel.
It's not something you need to do for more speed / memory / xxx whatever. [131090520040] |It is a valuable thing to do if that's the stage you feel you are at in your development. [131090520050] |If you want to have a deeper understanding of what this whole "open source" thing is about, about how and what the different parts of the kernel are, then you should give it a go. [131090520060] |If you are just looking to speed up your boot time by 3 seconds, then... what's the point... go buy an SSD. [131090520070] |If you are curious, if you want to learn, then compiling your own kernel is a great idea and you will likely get a lot out of it. [131090530010] |Bragging rights :) [131090540010] |I second gabe.'s answer (my comment is too long so I'm posting as an answer). [131090540020] |Unless you have a highly specialized purpose (e.g. embedded machines, strict security profiling), I see no practical benefit to compiling your own kernel other than to see how it's done. [131090540030] |By methodically reviewing the options, seeing how they interact with each other to build the system is a great way to understand how your system works. [131090540040] |It's amazing what you find out when you try to remove components that don't appear to have any purpose for the tasks you're trying to accomplish. [131090540050] |Be warned however--while jumping down the rabbit hole is undoubtedly exhilarating, it will suck up more nights and weekends than you thought possible! [131090550010] |Most users do not need to compile their own kernel; their distribution has done this work for them. [131090550020] |Usually distributions will include a set of patches to either integrate with certain parts of the way the distribution works, backports of device drivers and fixes from newer, but unreleased versions of the kernel or features that they are pioneering with their users.
[131090550030] |When you compile your own kernel, you have a couple of options: you can compile an official Linus Torvalds kernel, which will not include any of the patches or customizations added by your distribution (which can be good or bad), or you can use your distribution's rebuild tool to build your own kernel. [131090550040] |The reasons you might want to rebuild your kernel include: [131090550050] |
  • Patching bugs or adding a specific feature to a production system, where you cannot really risk upgrading the whole kernel for a single fix or two.
  • [131090550060] |To try out a particular device driver, or a new feature
  • [131090550070] |To extend the kernel, work on it
  • [131090550080] |Testing some of the "Alpha" modules or features.
[131090550090] |Many developers also use it to create custom versions of the kernel for embedded systems or set-top boxes where they need special device drivers, or they want to remove functionality that they do not need. [131090560010] |Compiling your own kernel allows you to participate in the kernel development process, whether that is simple stuff such as supplying PCI/USB device IDs for an existing driver that may make a newer device work for you, or getting deeply involved in the fray of core kernel development. [131090560020] |It also allows you to test development kernels on your hardware and provide feedback if you notice any regressions. [131090560030] |This can be particularly helpful to you and others if you have an uncommon piece of hardware. [131090560040] |If you wait for a distro kernel, it can take some time for fixes from your problem reports to filter into a new distro kernel release. [131090560050] |I also personally like to compile my own kernels to include support for only the hardware that I have. [131090560060] |When you run distro kernels and look at the output of lsmod(8), you see lots of modules loaded for hardware you don't have. [131090560070] |This can pollute the module list, /proc, /sys and your logs such that when you're searching for something it can get hidden amongst the noise; you also cannot be 100% sure that those modules are not contributing to a problem you are trying to diagnose. [131090570010] |For most uses generic kernels are good for virtually any hardware. [131090570020] |Additionally they usually contain(ed) distribution-specific patches, so compiling your own kernel may (might) cause problems. [131090570030] |The reasons to compile your own kernel are: [131090570040] |
  • You are using a source-based distro, so there is no 'generic' kernel
  • [131090570050] |You are a kernel developer and you develop the kernel
  • [131090570060] |You must customize the kernel, for example for an embedded device with a very limited hard drive
  • [131090570070] |Some driver is not compiled in (a very rare case)
  • [131090570080] |You want to patch the kernel AND you know what you are doing
  • [131090570090] |You want to learn how to compile a kernel
[131090570100] |If I wasn't using a source-based distro I wouldn't compile a kernel at all. [131090580010] |At work, we use hand-rolled kernels in order to apply out-of-tree patches such as vserver and unionfs. [131090580020] |At home, I am compiling hand-rolled kernels in order to find which commit introduced a bug I am experiencing. [131090580030] |Once I've finished that, I will probably stick to a hand-rolled kernel until the bug is fixed in my distribution (Debian), at which point I would revert to their kernels again. [131090590010] |If you want to install Linux on very specific hardware, say more exotic than a DS, you will have to cross-compile your own kernel. [131090600010] |I can't believe the accepted answer here starts out saying "It's not something you need to do for more speed / memory / xxx whatever." [131090600020] |This is totally false. [131090600030] |I routinely custom-build my kernels to both remove unneeded code and include performance-enhancing code, mostly related to hardware. [131090600040] |For example, I run some older hardware and can eke out some performance gains by enabling rarely enabled kernel drivers such as HPT36x chipset support on some older MoBos that have this built-in. [131090600050] |Another example: BIG SMP under Slackware is the default and on a Dell 2800, for example, will consume a sizeable footprint to run things like GFSD (not as a kernel module) that, also by the way, consumes CPU ticks for something I don't need. [131090600060] |Likewise for NFSD and other catch-alls meant to please everybody, which is fine if you're just trying to get Linux up and running on a box, but if you do care about "speed / memory / xxx whatever" then these things matter and work. [131090600070] |All my production boxes run custom kernels. [131090600080] |If I'm on common hardware such as a Dell series (2800, 2850, 2900, etc...)
hardware, it's a simple matter of copying the kernel's .config file around to each box, compiling the kernel, and installing. [131090610010] |Here are some situations where compiling your own kernel will benefit you: [131090610020] |
  • A kernel with module loading disabled is more secure. [131090610030] |This will require you to select the modules you know you need and include them as part of the kernel, as opposed to compiling them as modules.
  • [131090610040] |Disabling support for /dev/kmem, or crippling it with the appropriate compiler option, is a good thing for security. [131090610050] |I think most distros do this by default now.
  • [131090610060] |I prefer not to use initrd's when possible. [131090610070] |Customizing your kernel to the hardware it boots from eliminates the initrd.
  • [131090610080] |Sometimes a later kernel version will have features you need, but this is very rare today. [131090610090] |I remember when I first started using Debian, it was using 2.4 kernels, but I needed a 2.6 kernel for udev support.
  • [131090610100] |Disabling networking protocols/options you don't need can speed up your TCP/IP performance.
  • [131090610110] |Disabling options you don't need lowers the memory footprint of the kernel, which is important in low RAM environments. [131090610120] |When you are using a 256MB RAM system as a router, this helps.
  • [131090610130] |I find all the "tty" devices in /dev annoying on systems where I generally only log in via serial or ssh.
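The trimming these answers describe shows up directly in the kernel's .config: the drivers you need are built in (=y) and everything else is simply absent. An illustrative fragment (CONFIG_BLK_DEV_SD, CONFIG_BT and CONFIG_MODULES are real Kconfig symbols, but which ones you keep depends entirely on your hardware):

```
CONFIG_BLK_DEV_SD=y
# CONFIG_BT is not set
# CONFIG_MODULES is not set
```

Leaving CONFIG_MODULES unset is the "module loading disabled" case from the first bullet above; note that disabled options appear as "# ... is not set" comments, never as "=n".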
[131090620010] |Skills required for a good Linux job [131090620020] |I am working as an IT Engineer in a reputed company in India. [131090620030] |The problem is that though I was told that I would be given work on Linux, I am made to do work on Java and Windows. [131090620040] |I am uncomfortable with Java and hate Windows. [131090620050] |I have started learning Python by myself but it's tough to give it ample time due to my ongoing job. [131090620060] |Frankly, I am not an expert coder. [131090620070] |I tried a lot to get into Linux kernel development during my college days but realized that I am not that good a coder. [131090620080] |So I decided to do RHCE and go for server management. [131090620090] |What I want to know is what skill set is required to get a job in Linux projects. [131090620100] |In August 2011, I am planning to take a break from my job if this company doesn't give me a good Linux project. [131090620110] |What skills shall I acquire in order to get a good Linux job? [131090620120] |One thing that I've decided to do during that break is to pursue RHCE. [131090620130] |After reading the first of the set of three RHCE course books, I am confident that I can sail through it. [131090620140] |Inputs from experts on this site would be invaluable. [131090620150] |My technical interests at the moment are - Python programming, C/C++ programming, Linux server management and cloud computing. [131090620160] |But the college degree that I have is by no means sufficient to get into some good company. [131090620170] |The practical knowledge I have is not of an expert level. [131090620180] |And the job experience I have is simply pathetic. [131090620190] |PS - I am extremely frustrated in my current job. [131090620200] |Though I think there's barely any need to mention it. [131090630010] |This has been suggested numerous times before in this context, but... [131090630020] |I'd suggest getting some experience in a free software project.
[131090630030] |This looks good on your resume, is valuable experience working with good people, and is useful for contacts. [131090630040] |People regularly get jobs through free software projects. [131090630050] |My impression (which may be incorrect) is also that it is not common for Indians to involve themselves in free software projects, and if true, that would help you stand out. [131090630060] |You say you are interested in Python. [131090630070] |There are lots of free software projects involving Python, with various levels of barrier to entry. [131090630080] |One that I am familiar with is Mercurial, where the barrier to entry is not too high, the community is friendly, the programmers are talented, and there are opportunities for participation. [131090630090] |And everybody uses version control. [131090630100] |You could pick up some small bite-sized bug and/or wishlist feature and work on it. [131090630110] |Other projects off the top of my head are Django, Pylons, SQLAlchemy, though I think Mercurial is as good or better than any of these from the POV of opportunity for participation. [131090630120] |Another possibility is Linux community distribution work, e.g. with Debian, which will also give you the opportunity of working with talented people. [131090630130] |Also good for making contacts etc. [131090630140] |Also, if you are interested in C++, the apt and aptitude projects in Debian are important and severely undermanned. [131090630150] |In general, most free software projects don't have enough manpower, particularly the smaller ones, and are eager for assistance. [131090630160] |HTH. [131090640010] |You have several paths that offer different job opportunities: [131090640020] |
  • web based stuff
  • [131090640030] |native projects
  • [131090640040] |cross platform development
  • [131090640050] |porting to Linux
[131090640060] |In general, be prepared to use other Unixes along with Linux (although Linux is totally dominating right now). [131090640070] |Web based [131090640080] |Pretty much anything web based that doesn't use .NET is Linux stuff (or cross-platform). [131090640090] |You can concentrate on any of the widely used languages: PHP, Python, Perl, Ruby. [131090640100] |Native projects [131090640110] |These are mostly open source or high-performance computing jobs. [131090640120] |In Europe it is kind of common to hire a full-time programmer to modify an open-source project (and provide support) instead of paying insane licensing fees for a commercial product that won't fit anyway. [131090640130] |The high-performance area is sort of Linux only right now, therefore jobs in this area will most likely lead to Linux. [131090640140] |This area is very C heavy, with a little bit of C++ and a lot of Java. [131090640150] |Cross platform development [131090640160] |Kind of an odd area. [131090640170] |There are some companies that provide cross-platform software, some have special teams for specific platforms, some have cross-platform teams. [131090640180] |But many companies simply use Java (not that it helps much). [131090640190] |Porting to Linux [131090640200] |These jobs do pop up from time to time. [131090640210] |Some company sees an open market and decides to expand. [131090640220] |I personally would run away from such jobs.
[131090650050] |The project's planet is normally where these 'GNOME companies' report their deeds. [131090650060] |Have a look. [131090660010] |Hi [131090660020] |I am made to do work on Java and Windows. [131090660030] |Good news, it sounds like you are writing/working with code at least. [131090660040] |If I was in your shoes I think I would take a pragmatic approach, and learn how to port that Java app to Linux (when the boss is not looking). [131090660050] |It's a good exercise to make an application portable. [131090660060] |I am uncomfortable with Java [131090660070] |Don't be, in the Linux world you use the best language for the task. [131090660080] |(Best free language at least). [131090660090] |Therefore you need to make sure that you are comfortable with Java, C++/Qt, C, Python, PHP, Perl, etc. [131090670010] |What is excessive swapping? [131090670020] |This post led me to ask that question. [131090670030] |Cache contention [131090670040] |On a large site, if you are using MyISAM, contention occurs in the database tables when the cache is forced to clear after a node or a comment is added. [131090670050] |With tens of thousands of filter text snippets needing to be deleted, the table will be locked for a long period, and any accesses to it will be queued pending the purge of the data in it. [131090670060] |The same is true for the page cache as well. [131090670070] |This often causes a "site hang" for a minute or two. [131090670080] |During that time new requests keep piling up, and if you do not have the MaxClients parameter in Apache set up correctly, the system can go into thrashing because of excessive swapping.
[131090680040] |Disk access is much slower than RAM access, so every time the computer has to swap, the program that caused it will have to pause while the data is transferred. [131090680050] |Excessive swapping (also called thrashing) is what happens when the amount of memory the programs are actively using exceeds the amount of physical memory. [131090680060] |The computer spends all its time swapping data between disk and RAM, and little time actually accomplishing any work. [131090680070] |In the article you mention, the problem was that too many Apache processes were forked. [131090680080] |Each process requires some memory, and if MaxClients is set too high, Apache will keep forking child processes until it brings the system to a crawl.
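For completeness, a quick way to check whether a box is actively thrashing, rather than merely holding idle pages in swap (a sketch, assuming a Linux /proc and the usual procps tools):

```shell
# How much swap is configured and how much is still free (Linux-specific):
grep -E '^Swap(Total|Free)' /proc/meminfo
# For the dynamic picture, run `vmstat 1` and watch the "si"/"so" columns
# (pages swapped in/out per second): persistently non-zero values while the
# CPU sits in I/O wait are the classic thrashing signature.
```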