[131012370010] |Colors in Man Pages [131012370020] |When I look at a man page in my 'console' (not an xterm) I see some coloration, but I don't get this in my xterm's (e.g. konsole) is there any way I can enable this? hopefully a fairly simple solution? [131012380010] |I think this might be helpful. [131012390010] |link should be help full [131012400010] |You need to use the termcap(5) feature. [131012400020] |The man page on some Unices says this tool is obsolete and to use 'terminfo', but it's still available on others, and terminfo is more complicated. [131012400030] |I do the following so that less and man (which uses less) will have color: [131012400040] |And then in my ~/.bashrc, I do this: [131012410010] |export PAGER=most [131012420010] |Stochastic version of seq for generating sequence of random number/words? [131012420020] |Some time ago I used a seq-like tool for printing a sequence of pseudo random generated number to stdout. [131012420030] |You could specify a range, seed and the number of samples and much more. [131012420040] |I've just forgotten the name of this tool! [131012420050] |Does anyone can help me out? [131012420060] |Perhaps you know even a more advanced tool? [131012420070] |That, for example, supports different probability distributions or even the generation of a sequence of random words under different alphabets and length/character distributions. [131012430010] |Do you mean jot? [131012430020] |If you use Ubuntu the package is athena-jot. [131012430030] |A simple example: [131012430040] |Bye. [131012440010] |If you don't mind writing a small script to do what you need, I'd recommend doing it in R, the open-source statistics system. [131012440020] |For instance, consider this one-liner to get a list of 100 Gaussian-distributed numbers: [131012440030] |Let's break this down. [131012440040] |The standard R command brings you into an interactive programming environment, which is fine if you're trying to work out how to do something by hand or are building something up incrementally, but from your question, it sounds like you just need a list of numbers to send to another program. [131012440050] |So instead, we use Rscript, which behaves more like a traditional Unix script interpreter: you can pass it the name of a file containing an R script, or use the standard -e flag to pass the entire program text on the command line. [131012440060] |rnorm() is the R function to get a list of random numbers with the "normal" or Gaussian distribution. [131012440070] |It takes up to three parameters, only the first of which is required, how many numbers you want. [131012440080] |We've asked for 100. [131012440090] |By taking the defaults for the other two optional parameters, we get a mean of 0 and a standard deviation of 1. [131012440100] |The arithmetic after that is just showing off a cool feature of the R language: you can do arithmetic on whole data tables, matrices, etc., just as easily as a scalar value in a more typical language. [131012440110] |I've multiplied all generated values by 100 and added 100 to them, just because I can. [131012440120] |Because R is a full-fledged programming language, there's no limit to the things you could do with this list of numbers. [131012440130] |That's the advantage of using such a system instead of a fixed-purpose command like jot. 
[131012440140] |We pass the result of that previous operation to the write() function, which writes the data out to a file by default, but we've overridden that by passing a blank string for the second parameter, the file name, so it writes the table out to the terminal instead. [131012440150] |The next parameter, 1, just tells it we want our output in single-column format. [131012440160] |R has many other random number generation functions built into the base system. [131012440170] |For instance, we can mimic the jot command in lcpriani's answer with this script: [131012440180] |Here we're using runif() to get 10 uniformly-distributed random numbers from 12 to 27. [131012440190] |Like rnorm(), which we used above, this function returns floating-point values, so we have to round() them to their nearest integer values before writing them to the screen. [131012440200] |R also has a rich set of add-ons in CRAN, a package repository modeled on Perl's CPAN. [131012440210] |One you might be interested in is simply called random, which acts as an interface to random.org, a service that returns true random numbers generated from atmospheric noise. [131012440220] |R is a complete programming environment, so it may be that you don't actually need to get your numbers out of R in text format. [131012440230] |You might be able to solve your problem entirely in R. Give it a shot. [131012450010] |multigateway routing for specific src port [131012450020] |I have two gateway to access internet, somehow I want to load balancing it, so far its working, but some connection or service need persistent gateway IP to be used, so the client should never change its gateway once it have connected to dest., my current implementation seems to be round-robin or whatever it is. [131012450030] |this is my iproute [131012450040] |now i want to fix it somehow the gateway the client will use is predetermined, for example by using source port, if the source port is even number we use gw.1 and odd number go through gw.1, can we do that using ip route? [131012450050] |*note that I only have one outbound interface : eth0 here. [131012460010] |Use policy routing with marking packets. [131012460020] |I'm not sure what format of configuration it is but you should check it in your distro. [131012460030] |In commandline it should look like (not tested but should work) [131012460040] |Edit: lines [131012460050] |Names routing table 200 by name "ssh". [131012460060] |It is preserved between boots. [131012470010] |Adding a line of text to multiple files [131012470020] |So, I have a bunch of files in a directory, and I need to insert a line of text into each of them. [131012470030] |They have essentially the following format: [131012470040] |And I'd like to insert a line before the closing [131012470050] | [131012470060] |tag. [131012470070] |My first assumption is that I should be able to do this with sed, probably matching and replacing that tag. [131012470080] |I'm going to start attempting this now, but if anyone has a existing way to do this, I'd love to hear it. [131012480010] |Something like this: [131012480020] |works. [131012480030] |If you want to repeat this command on a lot of file you can do something like: [131012490010] |If you don't mind Perl try: [131012490020] |The -i switch will save your old file with a .old extension and print to the current one. [131012500010] |You can also use "ex" (command-line vi) if the editing you want to do is even somewhat complicated. 
[131012500020] |For example, suppose you only want to do the insert for one instance of "". [131012500030] |

A shell script like this can work:
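The original script seems to have been lost in formatting here, so what follows is only a minimal sketch of what such a wrapper might look like. It assumes the closing tag is </config> and the inserted line is <newoption>value</newoption> -- both are placeholder examples, substitute whatever your files actually contain -- and that the files to edit are passed as arguments:

#!/bin/sh
# Sketch: for each file given on the command line, use ex (command-line vi)
# to insert a line just before a line matching the closing tag.
# The tag </config> and the inserted text are placeholder examples.
for f in "$@"; do
    ex -s "$f" <<'EOF'
/<\/config>/i
    <newoption>value</newoption>
.
wq
EOF
done

Because the ex address is a search pattern, only a single matching line is touched per run; repeat the insert command (or use a g// command) if you need to handle more than one occurrence.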

[131012500040] |This approach gives you the advantages of "ex": finding a location with elaboarte patterns, and 'cursor movements'. [131012500050] |You can do things like find a pattern, then find the next instance, THEN do the insert. [131012500060] |Or you can change text, rather than just doing inserts. [131012500070] |Or you can change between ranges. [131012500080] |Don't forget that "ex" lets you use "." as the current line, so .,/^somepatter/s/blah/foo/ will work. [131012510010] |Which laptop is most compatible with Linux? [131012510020] |I've always been unlucky with regards to choosing a laptop that I can install Linux on. [131012510030] |If it's not the wireless card that's not working out of the box, it's the video card. [131012510040] |Also, I'm still not able to hibernate my computer, close the lid and resume where I left off at a later point. [131012510050] |I always have to shut down the laptop or leave it on. [131012510060] |Is there a laptop vendor that is considered to have the best trade off between performance and compatibility with Linux? [131012510070] |If not, then what should I look for when buying a laptop? [131012520010] |I'd suggest buying one with Linux preinstalled so you know the hardware is compatible. [131012520020] |Dell still sells some, though not sure if you can still use the web interface... [131012520030] |Rumor has it you have to call now... [131012530010] |You can check on this site: http://www.linux-laptop.net/ [131012530020] |You look a lot of laptops and their level of "compatibility" with distributions of GNU/Linux that other user have tested, maybe you can find something that you like there. [131012540010] |There is no best laptop for Linux, because this heavily depends on your usage. [131012540020] |I'd recommend getting a laptop with Linux preinstalled. [131012540030] |Other than that I can safely say that an Aspire Timeline 1810T runs Linux very well, but is a only subnotebook. [131012550010] |My experience is mostly with Dell latitude series of laptops. [131012550020] |Looking for Linux compatibility, their actual series is a go, and, on Fedora, they work with all the power saving features (suspend, resume, disk spinning...) [131012550030] |I am not biased, but Intel hardware (Centrino brands, Core2 Duo, new Core i3, i5 and i7) are good to go, mostly because all of the hardware drivers are open source and kernel included (apart for firmware blobs of wifi cards), so they are a safe bet. [131012550040] |The same for netbooks (apart from the Poulsbo graphic adapter). [131012550050] |For a safe order, the Latitude or Vostro series with an intel wifi adapter should be ok. [131012560010] |I'm not sure what issues you're constantly experiencing but I run Gentoo on Lenovo Thinkpad without problems (fingerprint reader does not work) - with possible problems with removal of BKL in recent kernels (however 2.6.33 worked ok). [131012560020] |Previously I used IBM Thinkpad. [131012560030] |From my small experience with them: [131012560040] |
  • Thinkpads seem to have a community which helps with configuring them (IRC channel, website).
  • [131012560050] |
  • Unless you need high-performance graphics, use Intel. [131012560060] |I had a lot of trouble getting an ATI card (XPress 200M) to work (basic OpenGL was OK, but there were problems with KMS - at least some time ago)
  • [131012560070] |
  • Don't trust the Windows recovery tool. [131012560080] |Back up the positions of your partitions - it said it wouldn't change partitions other than C:, but it deleted the first secondary partition (/dev/sda5). [131012560090] |Strangely, grub was left in place and the data was undamaged (fortunately I could reverse-engineer the positions).
  • [131012560100] |In addition to recommending linux laptops I can recomend Thinkpads (you asked) - I didn't use many other laptops but they worked. [131012570010] |The problem with notebooks with Linux preinstalled, is which distro comes with them. [131012570020] |I've bought one which had an unknown distro (Satux), that was Debian based, but included proprietary drivers and no access to the sources for the distro or drivers. [131012570030] |When I finally decided to install Ubuntu over it, I started to have to chase drivers all around the internet, for the video adapter, and wi-fi (power-saving seems to be working). [131012570040] |But the wi-fi could not work reliably so I disabled the internal adapter and bought an external USB wi-fi dongle that works nicely (but with manually recompiled drivers, too). [131012580010] |Always have a live USB/optical disk of your favorite OS handy, so that if you come across some laptop, you can boot off it and see how it fares. [131012580020] |Ensure that you test problematic stuff like WiFi, 3G, and GPU (and maybe Bluetooth). [131012590010] |The most light-weight Compiz compatible environment? [131012590020] |I require Compiz for organizational reasons; I make heavy use of its window sorting features. [131012590030] |But I like light-weight environments so that I can give as much of my system's resources to the applications as possible. [131012590040] |Is there a DE that plays nice with Compiz, but is still very light? [131012600010] |Do you actually require a DE at all? [131012600020] |I've run Compiz without one before, there's no real reason it would require one, and that's about as light-weight as you can get [131012610010] |I'm running Compiz standalone right now. [131012610020] |Works just fine. [131012610030] |I followed the instructions on the Arch Linux Compiz wiki page; see especially the section As a Standalone Window Manager. [131012620010] |Scheduling command/script by specifying the exact second [131012620020] |I'm using bash and wondering how I can execute a command/script at a later specified time, with the accuracy of a second? [131012620030] |I read the man page for the at-command, but as far as I could understand it's only possible specify minutes (and not seconds). [131012620040] |Right now I first use at and then sleep to get it to execute the right second. [131012620050] |To demonstrate, if I would like to run my_script.sh at 22.21.05 I would do: [131012620060] |But it would be much nicer to have a command with it built in. Something like at 22.21.05. [131012630010] |I don't believe that this is possible. cron is only granular down to the minute and the at utility hooks into it to do it's work. [131012630020] |I think you have the right solution--sleeping for x seconds before executing. [131012640010] |You could make your technique into a function: [131012640020] |Try it out: [131012640030] |You should be aware, however, that precision is not guaranteed. [131012650010] |preventing mounting some partition by user (gnome+udisk) [131012650020] |I'd like to prevent automounting one partition (known UUID etc.) on one external harddrive. [131012650030] |Others should be automounted. [131012650040] |Distro is 'modern'(read uses lots of beta) and uses udisk+udev for mounting. [131012660010] |One possibility is to add your own udev rule for this partition, that overrides the default ones. [131012660020] |On Ubuntu 10.04 /lib/udev/rules.d/80-udisks.rules has some default rules that make udisks ignore some partitions (e.g. 
partitions that are known to be rescue partitions etc.) which might be an inspiration... [131012660030] |On Ubuntu 10.04 your own rules should go into /etc/udev/rules.d/ (see the README there). [131012660040] |After some experimenting, the following seems to work: [131012660050] |Put that line in a *.rules file that has a name that lexically follows the rules file that contains the normal udisk-related rules. [131012660060] |Easiest to do that is to start it with a higher number (so I used 81 to make sure it overrides the rules in 80-*). [131012660070] |Of course use whatever UUID your partition has. [131012660080] |On another distro those things might be located differently, but the basics should be the same... [131012670010] |Sed and awk question [131012670020] |I'm trying to use sed or awk to replace 5 lines in a smb file but I just don't have any idea how to deal with the newlines. [131012680010] |Sed is quite bad at this, because it operates one line at a time. [131012680020] |The only decent technique I've ever seen to do this is this one, which involves storing the entire file in sed's hold buffer and then operating on it all at once: [131012680030] |If you can, it's much easier to use perl to accomplish this: [131012680040] |search can contain \ns to represent newlines [131012690010] |Accessing the iPhone file system via Linux/Unix [131012690020] |My old iPhone is semi-bricked. [131012690030] |It turns off after a few minutes and doesn't get past the load screen. [131012690040] |It has important data on it which wasn't backed up. [131012690050] |I'm trying to access the hard-drive's contents without resorting to a hard-drive recovery company. [131012690060] |Someone mentioned to me that you can use a Linux machine and the program Amarok to access the drive's contents. [131012690070] |However, my phone wasn't jailbroken nor had the usb tethering option enabled. [131012690080] |Plus, it had a passcode on it, so I'm guessing this method wouldn't work. [131012690090] |Are there any other ways you can think of to access the drives contents using a Unix machine? [131012690100] |Is there any way to take it apart and connect a cable directly to the hard-drive? [131012700010] |Take a look at this article to see if it will give you what you need. [131012700020] |I haven't tried it yet, as I was looking at tethering which involves compiling some kernel drivers, but I don't think you'll need it for just accessing the contents. [131012700030] |Good luck! [131012710010] |Advice on running a headless laptop [131012710020] |My current laptop is falling apart, specifically the screen hinges. [131012710030] |Once (or ideally before) they break, I'd like to run the machine as a storage device. [131012710040] |I strongly prefer using GUIs over CLI, any recommendations on a distro/useful tools to install before turning it into a box. [131012710050] |Any cautionary tales about running a laptop without a screen? [131012710060] |So far I imagine installing Ubuntu (not Server) and running Remmina to remoting in via VNC/etc. [131012710070] |I would like to tag this question "headless", but I don't have the reps :) [131012720010] |You could also use X forwarding over ssh to run the occasional GUI tool (you don't need an X-server on the "storage device" then to run GUI apps). [131012730010] |A better alternative to VNC is NX, but it's a bit harder to setup. [131012740010] |What does a "OVM reboot" mean? [131012740020] |What does this term mean? [131012750010] |OVM probably means Oracle VM in case. 
[131012750020] |So OVM reboot probably mean to reboot the virutal machine from Oracle VM Manager. [131012760010] |how to remove gnome to run only compiz [131012760020] |Hello all, [131012760030] |Regarding this question I really like the idea of running compiz without a desktop environment. [131012760040] |I currently have Ubuntu 10.04 (with gnome and compiz) and want to give it a try. [131012760050] |Now how can I configure my system to enable compiz running without gnome (then I can remove gnome completely)? [131012770010] |I think this wiki page is what you are looking for. [131012780010] |How to access the history on the fly in unix? [131012780020] |for example, if I do a [131012780030] |I remember there is some easier way to switch to thisismyfolder912 than having to do a [131012780040] |What is that way and how does it work? [131012780050] |Also, what are the other ways I can use this? [131012790010] |Are you talking about classic history expansion, or Readline processing? cd !$ on the next input line will substitute in the last argument of the previous line, or M-. or M-_ will yank it using Readline. [131012800010] |It's as simple as Alt + . [131012800020] |Alt + . [131012810010] |If your question is about accessing command history, then try this well-named command [131012810020] |You can also try Ctrl + r, and start typing a command you're trying to remember that you've recently typed. [131012810030] |Hit ESC to select the command or exit. [131012810040] |This works for me on SuSE at least; not sure about other distros. [131012820010] |If you use bash i suggest pushd and popd. [131012820020] |You can create a stack of directory and browse it rapidly. [131012820030] |See this example: [131012830010] |this has always worked for me: [131012840010] |Picking up a tip from another thread, if you put: [131012840020] |in your .bashrc then, you can start typing something from your history, and then press the up arrow, and then rather than going through your history item by item, it'll skip right to previous entries that begin with what you've already typed. [131012840030] |I guess this doesn't help much with the particular example given in the question, but it is one thing that helps me access history on the fly. [131012850010] |What About pressing ATL+. together.So you can iterate through all the previous command [131012860010] |On a related note, I recommend using histverify in bash. [131012860020] |Put this in your ~/.bashrc: [131012860030] |This will cause bash to print out the command after expanding !$ or other history functions, and give you a chance to look at it before hitting enter again to actually run it. [131012860040] |For me, the sanity check is worth the occasional extra key press. [131012860050] |Want to make sure I'm running the cd foo command, not the rm -rf foo one... [131012860060] |I frequently use the Ctrl-R approach, as well as Alt-. (which is a good fit for the scenario you describe). [131012860070] |I'll use !$ on occasion. [131012860080] |These are very useful general purpose techniques. [131012860090] |But to address your specific question: [131012860100] |Making a directory and cd'ing directly into it is such a common combination that it is useful to have a function to wrap it up.. [131012860110] |Usage: mcd thisismyfolder [131012870010] |Where did the "wheel" group get its name? [131012870020] |The wheel group on *nix computers typically refers to the group with some sort of root-like access. 
[131012870030] |I've heard that on some *nixes it's the group of users with the right to run su, but on Linux that seems to be anyone (although you need the root password, naturally). [131012870040] |On Linux distros I've used it seems to be the group that by default has the right to use sudo; there's an entry in sudoers for them: [131012870050] |But that's all tangential; my actual question is: Why is this group called wheel? [131012870060] |I've heard miscellaneous explanations for it before, but don't know if any of them are correct. [131012870070] |Does anyone know the actual history of the term? [131012880010] |Wikipedia knows it? [131012880020] |The term is derived from the slang term big wheel, referring to a person with great power or influence [131012890010] |It comes to us from BSD. [131012890020] |This is verifiable. [131012890030] |But where did it begin? [131012890040] |Here is a non-verifiable explanation- BSD got it from the TOPS-20 O/S. [131012890050] |http://lists.freebsd.org/pipermail/freebsd-chat/2003-December/001725.html [131012900010] |The Jargon File has an answer which seems to agree with kmarsh. [131012900020] |wheel: n. [from slang ‘big wheel’ for a powerful person] A person who has an active wheel bit...The traditional name of security group zero in BSD (to which the major system-internal users like root belong) is ‘wheel’... [131012900030] |A wheel bit is also helpfully defined: [131012900040] |A privilege bit that allows the possessor to perform some restricted operation on a timesharing system, such as read or write any file on the system regardless of protections, change or look at any address in the running monitor, crash or reload the system, and kill or create jobs and user accounts. [131012900050] |The term was invented on the TENEX operating system, and carried over to TOPS-20, XEROX-IFS, and others. [131012900060] |The state of being in a privileged logon is sometimes called wheel mode. [131012900070] |This term entered the Unix culture from TWENEX in the mid-1980s and has been gaining popularity there (esp. at university sites). [131012910010] |AFAIK, this is not the derivation, but ... [131012910020] |wheel is for real people, whereas users is for losers. [131012910030] |At least that's the way some Unix sysadm's think :-). [131012920010] |As others have said, it comes from the term "Big Wheel". [131012920020] |I think many of us are not familiar with this term because, according to at least one site, it became a popular expression after World War Two: [131012920030] |Big wheel is another way to describe an important person. [131012920040] |A big wheel may be head of a company, a political leader, a famous doctor. [131012920050] |They are big wheels because they are powerful. [131012920060] |What they do affects many persons. [131012920070] |Big wheels give the orders. [131012920080] |Other people carry them out. [131012920090] |As in many machines, a big wheel makes the little wheels turn. [131012920100] |Big wheel became a popular expression after World War Two. [131012920110] |It probably comes from an expression used for many years by people who fix the mechanical parts of cars and trucks. [131012920120] |They said a person "rolled a big wheel" if he was important and had influence. 
[131012920130] |For those like me who were born in the 1980s, we may find the following a closer cultural reference for a Big Wheel: [131012930010] |Caching bugzilla webpages for offline use [131012930020] |I have a long wifi free journey ahead of me and I would like to continue to productively review bugs even without the ability to change them. [131012930030] |Is there an easy way to cache all the pages generated from the query below? [131012930040] |https://bugzilla.gnome.org/buglist.cgi?product=banshee&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&bug_severity=blocker&bug_severity=critical&bug_severity=major&bug_severity=normal&bug_severity=minor&bug_severity=trivial&gnome_version= [131012940010] |You can use the command wget with the option --recursive. [131012940020] |But be aware that it could download a lot of sites :) . [131012940030] |To limit the result you could also use the the argument --domains=domain-list if you only want sites from a specific domain (or several domains separated by commas) and the --level=depth-argument to specify the level of recursion. [131012940040] |So, your command could look something like this: [131012940050] |But there's a lot more options to wget. [131012940060] |Check out the man page for wget. [131012950010] |Checkout App::SD on your local CPAN. [131012950020] |SD is a peer to peer bug tracking system which we've built to share with just about anything. [131012950030] |Contributors have helped us write adaptors for RT, Hiveminder, Trac, GitHub, Google Code, and Redmine. [131012950040] |You can extend SD to sync to other bug tracking tools by writing a small bit of code. [131012950050] |If you can help make SD work better with your bug tracker, drop us a line. [131012950060] |update I just realized that list doesn't yet include Bugzilla... sorry. but I'm gonna leave the answer up in the event that it does in the future, or someone wants to use if for offline caching of another bugtracker, or maybe you want to add bugzilla support. [131012960010] |As this would cover any Bugzilla installation I wonder if there is a recommend way upstream for this. [131012970010] |Is nohup indefinite? [131012970020] |If I run a program with an infinite loop with nohup, will the program run indefinitely (until the machine is reset or until I manually terminate it)? [131012980010] |Nohup sets the default behavior of the HANGUP signal, which might get overriden by the application. [131012980020] |Other signals from other processes with permission (root or same user) or bad behavior (seg faults, bus errors) can also cause program termination. [131012980030] |Resource limitations (ulimit) can also end the program. [131012980040] |Barring these, your infinite loop might well run a very long time. [131012990010] |Following are the two important line from nohup's source code. [131012990020] |(void)signal(SIGHUP, SIG_IGN); // Ignore SIGHUP [131012990030] |execvp(*argv, argv); // Run the command. [131012990040] |So until your application calls something like [131012990050] |signal (SIGHUP, SIG_DFL); //OR some custom signal handler [131012990060] |In the normal circumstances it will continue to run until you restart machine or kill it yourself. [131012990070] |It might also be terminated in the cases pointed by kmarsh. [131013000010] |How to set up two-factor authentication with OTP on FreeBSD? [131013000020] |I have a FreeBSD hosted server that I like to be able to get to from anywhere. 
[131013000030] |Normally I use SSH publickey to log in, or if I don't have my SSH private key available then I might use regular password over SSH. [131013000040] |However, when logging in from an untrusted machine there's always the risk of a keylogger capturing my password as I type it. [131013000050] |FreeBSD already has support for OPIE which is a one-time password scheme. [131013000060] |This works great, but the one-time password is the only authentication needed. [131013000070] |If I print out a list of one-time passwords to use later, then if I lose that list then that's all somebody needs. [131013000080] |I'd like to set up the authentication so that I need a one-time password plus something I know (a password, except not my usual login password). [131013000090] |I have a feeling the answer has something to do with PAM (and /etc/pam.d/sshd) but I'm not certain on the details. [131013000100] |How can I set up authentication where two methods are required? [131013010010] |Assuming this uses pam, it should be as simple as putting two required modules in /etc/pam.d/. [131013010020] |One for Opie, and one for your other auth. (say, normal UNIX password) [131013020010] |Since you want to use a password that is something other than the one for your normal account, try security/pam_pwdfile from the ports tree. [131013020020] |Basically, it allows you to use an alternate file (format: "username:crypted_password") to authenticate against. [131013020030] |To use it put the following line in /etc/pam.d/sshd right before the line for pam_opie: [131013020040] |auth required /usr/local/lib/pam_pwdfile.so pwdfile /path/to/pwd/file [131013030010] |How to make screen -R attach to the youngest detached session? [131013030020] |I'm using screen on debian lenny, and I would like to use the -R option. [131013030030] |From man screen: [131013030040] |However, when I run screen -R it does not actually attach to the youngest detached session. [131013030050] |Instead, it complains that there are "several suitable screens" and that I need to choose one of them. [131013030060] |Am I missing something? [131013030070] |How do I make this work as advertised? [131013040010] |Try using screen -RR. [131013040020] |Example: [131013040030] |Note that screen 5958 is the youngest. [131013040040] |Using screen -RR connects to screen 5958. [131013040050] |The -RR options is somewhat further explained in the documentation for -d -RR. [131013040060] |Another trick I often use is to use -S to give the screen a tag/label. [131013040070] |Then you can reattach using that tag without having to remember what was happening in each screen if the list gets unwieldy. [131013040080] |Example (Launch screens for vim and curl): [131013040090] |Note: The -dm option was just used to start a detached screen [131013040100] |And then, at a later date, you can easily reconnect using the tag curl. [131013050010] |I use screen -R in gnome-terminal and, as you said, if 2 sessions are running at the same time, then gnome-terminal quits immediately. [131013050020] |I solved it by running [131013050030] |this works just as you expect. [131013060010] |how can I edit large file? [131013060020] |I have few files. size >1 GB each. [131013060030] |I need to remove last few bytes from the files. [131013060040] |How can I do it ? [131013060050] |I am on HP-UX. [131013060060] |EDIT: I prefer to edit file in place to save disk space. 
[131013070010] |You can use dd for example: [131013080010] |Cut 2 kilobytes from end of file: [131013090010] |Try using hexedit I haven't tried it on HP-UX but it should work. [131013090020] |It allows you to move to a location in a file and truncate. [131013090030] |I'm pretty sure that it does not read the whole file in but just seeks to the appropriate location for display. [131013090040] |Usage is fairly simple once you have launched it the arrow keys allow you to move around. [131013090050] |F1 gives help. [131013090060] |Ctrl-G moves to a location in the file (hint: to move to end use the size of the file from the bottom row of the display). [131013090070] |Position the cursor on the first byte that you want to truncate and then press Escape T once you confirm the truncate will have been done. [131013090080] |Ctrl-x exits. [131013100010] |You can use split or ed, awk or any programming language. [131013110010] |Use a tool that gives you access to the truncate system call. [131013110020] |You can do it with only POSIX tools. [131013110030] |Warning, typed into a browser; be especially careful as dd is even more unforgiving of errors than the usual unix command. [131013110040] |123456 is the number of bytes to keep. [131013110050] |A Perl version is much more readable: [131013120010] |How do you create a user with no password? [131013120020] |In os X it's possible to have users without passwords. [131013120030] |If you inspect them with dscl their password show up as *. [131013120040] |This is used for system users such as users for databases like mysql, pgsql etc. [131013120050] |What's nice about this is that these users doesn't show up at the login screen and you can't login as them without sudo etc. [131013120060] |After deleting such a user trying some things out I wanted to recreate it, but couldn't. [131013120070] |I could manage to get a user with a blank password and I could set the password to *. [131013120080] |But none had the same effect as above, as the user showed up on my login screen. [131013120090] |How do you create such a user? [131013130010] |If you are on Ubuntu you can swap in the password for the Ubuntu user in /etc/shadow/ check out this article: [131013130020] |http://www.psychocats.net/ubuntucat/creating-a-passwordless-account-in-ubuntu/ [131013140010] |A user in /etc/passwd with home directory of /dev/null and shell of /sbin/nologin, can be used for sudo ing commands, but can't actually be logged into, for example; [131013140020] |from my /etc/passwd [131013140030] |I half remember this will make it disappear from the XDM login screen too, but I don't use XDM anymore, so that's a guess :-) [131013150010] |You can run passwd --delete after creating the user with adduser. [131013150020] |After this, you'll be able to login without entering a password. [131013160010] |I can't be certain about OSX, but on FreeBSD you can use: [131013160020] |the value of - passed to the -h option tells pw to set the password field in master.passwd to *, thus creating a no-login account. [131013160030] |Setting the shell isn't strictly necessary, but avoids a shell check from /etc/pw.conf. [131013170010] |What does the -f parameter do in the tar command. [131013170020] |When using tar I always include -f in the parameters but I have no idea why. [131013170030] |I looked up the man and it said; [131013170040] |-f, --file [HOSTNAME:]F [131013170050] |use archive file or device F (default "-", meaning stdin/stdout) [131013170060] |But to be honest I have no idea what that means. 
[131013170070] |Can anyone shed any light on it? [131013180010] |It lets you specify the file or device you're going to be working with. [131013180020] |Either creating, updating or extracting things from it depending on other supplied flags. [131013180030] |For example: [131013190010] |"-f" tells tar that the next parameter is the file name of the archive or STDOUT if it is "-". [131013200010] |Quite simple. [131013200020] |If you oppress the -f parameter, output is passed to stdout: [131013200030] |...what a mess! [131013200040] |The -f-parameter (as you quoted) expects a filename (and optionally a hostname), hence the first argument after it is the output filename: [131013210010] |Better bash history [131013210020] |I consistently have more than one terminal open. [131013210030] |Anywhere from two to ten, doing various bits and bobs. [131013210040] |Now let's say I restart and open up another set of terminals. [131013210050] |Some remember certain things, some forget. [131013210060] |I want a history that: [131013210070] |
  • Remembers everything from every terminal
  • [131013210080] |
  • Is instantly accessible from every terminal (e.g. if I run ls in one, switch to another already-running terminal and then press up, ls shows up)
  • [131013210090] |
  • Doesn't forget random things if there are spaces at the front of the command.
  • [131013210100] |Anything I can do to make bash work more like that? [131013220010] |I'm not aware about bash but for sure it is one of selling point of zsh shell. [131013220020] |Personally I prefere zsh over bash so I'd advice try it. [131013220030] |Part of my ~/.zshrc that deals with history: [131013230010] |I can offer a fix for that last one: make sure the env variable HISTCONTROL does not specify "ignorespace" (or "ignoreboth"). [131013230020] |But I feel your pain with multiple concurrent sessions. [131013230030] |It simply isn't handled well in bash. [131013250010] |Heavy write activity on SSD nukes system performance [131013250020] |I've noticed that when I do heavy write applications, the whole system slows down. [131013250030] |To test this further I ran this to do a (relatively) low-CPU, high disk activity: [131013250040] |This pumps out tens of thousands of strings per second to a file on my system disk. [131013250050] |When it's doing this, the mouse lags, TTYs become unresponsive, applications "fade" and generally the whole computer becomes unusable. [131013250060] |When I can eventually Control+C john, the system comes back to full strength after a few seconds. [131013250070] |This is an extreme example but I have similar issues with slightly less write-intensive activities like copying big files from fast sources or transcoding. [131013250080] |My main OS disk is a quite fast SSD (OCZ Agility 60GB) with EXT4. [131013250090] |If I write john's output to a mechanical disk with EXT4, I don't experience the same slow-downs though the rate is a lot slower (SSD does ~42,000 words per second, mechanical does 8,000 w/s). [131013250100] |The throughput may be relevant. [131013250110] |The mechanical disk also has nothing to do with the system. [131013250120] |It's just data. [131013250130] |And I'm using kernel 2.6.35-2 but I've noticed this issue since I got this SSD when I was probably using .31 or something from around that time. [131013250140] |So what's causing the slowdown? [131013250150] |EXT4 issue? [131013250160] |Kernel issue? [131013250170] |SSD issue? [131013250180] |All of the above? [131013250190] |Something else? [131013250200] |If you think I need to run an additional test, just drop a comment telling me what to do and I'll append the result to the question. [131013260010] |This has been a known issue for awhile. [131013260020] |Using an SSD-tuned FS like Btrfs might help, but it might not. [131013260030] |Ultimately, it is a bug in the IO scheduler/memory management systems. [131013260040] |Recently, there have been some patches that aim to address this issue. [131013260050] |See Fixed: The Linux Desktop Responsiveness Problem? [131013260060] |These patches may eventually make their way into the mainline kernel, but for now, you will probably have to compile your own kernel if you want to fix this issue. [131013270010] |There are a few things you can check on to try to improve SSD performance under Linux. [131013270020] |1) Set the mount point to 'noatime'. [131013270030] |Extra activity updating access times are generally wasted on most use cases. [131013270040] |Especially in the case of continually pumping single lines into a file, you are forcing multiple updates to the filesystem for every access. [131013270050] |2) Check the elevator. [131013270060] |The default elevator for most distros are set up for random access spinning platters. 
[131013270070] |SSD's don't need the extra logic, so setting the elevator to noop can improve performance by letting the hardware manage the writes. [131013270080] |3) Write-through v write-back caching. [131013270090] |This is a bit more esoteric, but you can check the caching method used with hdparm for the device. [131013270100] |Write-back caching can have a positive impact on SSD performance compared to write-through. [131013280010] |How to yank a particular line without moving the cursor in vim? [131013280020] |For example [131013280030] |How can I yank and paste Line 4 only to Line 12 without having to move the cursor to Line 4? [131013290010] |This should do it: [131013300010] |Try this: [131013310010] |If the cursor is already on line 12, then a simple [131013310020] |does it for me. [131013320010] |How about this: Cursor is on line 11, you're in "vi" mode. [131013320020] |You can apparently also do it with a pattern: [131013320030] |You could use "mo" (move) instead of "co" (copy) to just move the line, instead of yank and put. [131013330010] |Try: [131013330020] |You can use an argument of 0 to paste to line 1. [131013330030] |This will also work with ranges: [131013330040] |will copy lines m through n to line k+1. [131013330050] |In addition it doesn't matter where you are in the buffer. [131013330060] |The move command, m, works similarly. [131013340010] |Stop program running at startup in Linux [131013340020] |How do I stop a program running at startup in Linux. [131013340030] |I want to remove some apps from startup to allow them to be managed by supervisord e.g apache2 [131013350010] |Depending on your distro use the chkconfig or update-rc.d tool to enable/disable system services. [131013350020] |On a redhat/suse/mandrake style system: [131013350030] |On Debian: [131013350040] |Checkout their man pages for more info. [131013360010] |On Ubuntu 10.04 you can control some startup programs from the GUI. [131013360020] |System-->Preferences-->Startup Applications [131013370010] |Slackware and Arch linux have similar methods of stopping/starting processes at boot, different than the Ubuntu and Redhat-style examples given above. [131013370020] |In both Slackware and Arch linuxes, sh scripts exist in directory /etc/rc.d, typically one script per daemon, or one script per subsystem. [131013370030] |For example, Slackware starts the Apache web server with a script /etc/rc.d/rc.httpd, called at the appropriate time during system startup with an argument of "start". [131013370040] |Arch linux has differently-named scripts, but the same sort of thing goes on. [131013370050] |To keep some process from starting during system boot, on Slackware, you just make the appropriate script in /etc/rc.d not executable. [131013370060] |To keep Apache from starting at the next boot: [131013370070] |To stop an Apache that got started at boot: /etc/rc.d/rc.httpd stop You'll need to be root. [131013370080] |Arch is a bit more complex. [131013370090] |The file /etc/rc.conf, a shell script, has an array DAEMONS. [131013370100] |To keep Apache from starting at boot, you'd change this line in /etc/rc.conf: [131013370110] |To this line: [131013370120] |To stop an already executing apache, you'd execute /etc/rc.d/httpd stop as root. [131013380010] |If you are dealing with a modern Ubuntu system and a few other distros you may have to deal with a combination of traditional init scripts and upstart scripts. [131013380020] |Managing init scripts is covered by other answers. 
[131013380030] |The following is one way to stop an upstart service from starting on boot: [131013380040] |The problem with this method is that it does not allow you to start the service using: [131013380050] |An alternative to this is to open the servicename.conf file in your favorite editor and comment out any lines that start with: [131013380060] |That is, change this to [131013380070] |where the "..." is whatever was after "start on" previously. [131013380080] |This way, when you want to re-enable it, you don't have to remember what the "start on" parameters were. [131013380090] |Finally, if you have a new version of upstart you can simply add the word "manual" to the end of the configuration file. [131013380100] |You can do this directly from the shell: [131013380110] |This will cause upstart to ignore any "start on" phrases earlier in the file. [131013390010] |Configuring IPoD on Linux [131013390020] |I use Fedora 13 and very recently i brought a new Apple iPOD shuffle. [131013390030] |I would like to know whether i can transfer music without using iTunes into my iPOD. [131013390040] |I tried using gtkPOD and RhythmicBox, but thats is of no avail. [131013400010] |I'm not sure about state now but Apple is know for play of cat and mouse. [131013400020] |You may find one day that update of software of iPod had broken its compatibly with Linux by completly redesigning its format. [131013400030] |Until one day someone reverse engeneer the new format and supplied patches for projects. [131013400040] |It lasts as long as Apple would not decide to switch format again. [131013400050] |In short: iPod is not the best player for Linux enthusiast but when you have it you may be able to use it. [131013400060] |PS. [131013400070] |Also Banshee have iPod support [131013410010] |Your IPod should work with gtkpod which uses the libgpod library, like almost every linux application. [131013410020] |Have a look at http://gtkpod.wikispaces.com/Supported+iPods for your model. [131013410030] |After that, have a look at the "Getting Started" Page of the gtkpod project: http://www.gtkpod.org/wiki/Getting_started [131013420010] |How to zip individual files from different directories in one line? [131013420020] |I want to zip numerous files from different directories(paths) in one line. [131013420030] |How can I do that? [131013430010] |I'm probably misunderstanding you, but you just specify the paths one after the other: [131013440010] |How to set default file permissions for all folders/files in a directory? [131013440020] |I want to set a folder such that anything created within it (directories, files) inherit default permissions and group. [131013440030] |Lets call the group "media". [131013440040] |And also, the folders/files created within the directory should have g+rw automatically. [131013440050] |I am fairly certain there is a way to do this and cannot think of what its called at the moment. [131013450010] |I found it: Applying default permissions [131013450020] |From the article: [131013450030] |Next we can verify: [131013450040] |Output: [131013460010] |You can run find and give it chmod +whatever as execute argument. [131013460020] |You should be able to find the post that contains the exact information when you search on ubuntuforums for user witchcraft and permission or chmod [131013470010] |Convert ascii code to hexadecimal in UNIX shell script [131013470020] |I'd like to convert ASCII code (like - or _ or . 
etc) in hexadecimal representation in Unix shell (without bc command), eg : - => %2d [131013470030] |any ideas ? [131013480010] |There's a printf tool that simulates the C function; normally it's at /usr/bin/printf, but a lot of shells implement built-ins for it as well. [131013480020] |You can use %02x to get the hex representation of a character, but you need to make sure you pass a string that includes the character in single-quotes (Edit: It turns out just a single-quote at the beginning is sufficient): [131013480030] |You can make a shell function for convenience: [131013490010] |Try od: [131013490020] |$ echo -n "-_." | od -A n -t x1 [131013490030] |2d 5f 2e [131013490040] |-A n means do not print offsets and -t x1 means that the type of the input is hexadecimal integers of 1 byte. [131013500010] |Support for usb 3.0? [131013500020] |I could not seem to find the question here but I was wondering what experience if any users have had with USB 3.0? [131013500030] |My asus p5x58d has usb 3 on it but I have yet to try it out and also do not know much in terms of support. [131013500040] |I feel this is best as a community wiki where we can share experience on USB 3.0. [131013500050] |This article discussed briefly linux and usb 3. [131013500060] |Let the comments flow... [131013510010] |Tunnel traffic through another machine over ssh [131013510020] |I want to be able to route a portion of my traffic through another machine over SSH. [131013510030] |For example, is it possible to browse the web through the ssh tunnel and also browse the web through your LAN connection without much effort? [131013510040] |(Ie. i want a seamless transition from using tunnel to using lan) [131013510050] |Thus it is not a simple how to do I tunnel ALL web traffic through ssh tunnel but moreso how can I setup a tunnel that I can use at my discretion but not impeded my normal traffic flow. [131013510060] |(Kind of on an as needed basis) [131013510070] |I would like the filtered traffic to be encrypted when leaving the lan and it could be ftp, ssh, web, mail whatever traffic. [131013510080] |Some questions I think I need to answer/address: Does this require multiple nics? [131013510090] |Does this require setting up a proxy? [131013510100] |Can I even do this the way I want it to function? [131013510110] |Are there other options aside from a tunnel to achieve the result of having some traffic (user decides) encrypted through another machine (outside the LAN) and have other traffic use the normal flow through the LAN and out into the internet world? [131013510120] |Edit: If some of the answers depend on specific traffic types I can be more specific and say that web traffic is a primary focus of this however other traffic such as IM/Email traffic would be desirable. [131013520010] |I don't know if that's what you're looking for, but you can use ssh -D4545 domain.com to open a socks proxy tunnel at port 4545 to the desired machine from your computer. [131013520020] |you can then set up that proxy in your application (say Firefox) and use a plugin to quickly engage and disengage the proxy settings (something like TorButton). [131013520030] |There is one drawback though, when your target host has more than one network device, you cannot control which one will your data take to leave the machine you're tunneling to. 
[131013520040] |This can be circumvented by installing a normal proxy server on that machine, routing the traffic from, say, localhost:3128 to the desired network interface, and then doing a ssh -L4545:localhost:3128 and pointing all applications to 4545 that ought to use the proxy. [131013520050] |This gives you greater control on the proxy side, as the integrated proxy of SSH is not really configurable. [131013530010] |I accidentally chmod -R +x on a directory. How do I restore the correct permissions? [131013530020] |Well, to be specific, it was chmod -R 755. [131013530030] |Now every file is executable, which I don't want. [131013530040] |I am thinking that I should look at the first two bytes of each file for the #!, but will this cover everything? [131013530050] |Should I instead use file to look at everything and base my decision on that? [131013530060] |Or, more likely, is there an even better way to do this? [131013530070] |What is the preferred way to recursively go through a directory and set -x on files that are not 'supposed to be' executable? [131013540010] |Unfortunately, you can't rollback. [131013540020] |If you have a backup, you're in luck. [131013540030] |You can create a script that will compare permissions from your backup, and reproduce them on your existing folder. [131013550010] |Well, without a shebang line, the file will be executed as a shell script, nominally with /bin/sh. [131013550020] |You're idea is a good start and on the assumption that the directory in question doesn't contain mission-critical files there probably isn't much risk to executing some grep and chmod combo. [131013550030] |You may encounter false positives, i.e., files with a shebang line that are not meant to have their executable bit set but without knowing more information on the purpose of what's in the directory, only you can decide whether that poses a significant existential threat to your system and/or data. [131013560010] |I believe you will want something like [131013560020] |find dir -type f -exec chmod ugo-x '{}' + [131013560030] |This looks for all regular files, recursively in dir, (it excludes directories, and devices) and removes the executable bit. [131013560040] |I would start here, and then work my way towards making files that are supposed to be executable, executable. [131013560050] |The following should work exactly as you asked for ( it will find all regular files, grep them for #! and then remove the x bits if not found ) [131013560060] |possibly a better version of the above (less pipes) [131013570010] |There's no magic bullet here. [131013570020] |The permissions carry information which is not always redundant. [131013570030] |If you'd done this in a system directory, your system would be in a very bad state, because you'd have to worry about setuid and setgid bits, and about files that are not supposed to be world-readable, and about files that are supposed to be group- or world-writable. [131013570040] |In a per-user directory, you have to worry about files that aren't supposed to be world-readable. [131013570050] |No one can help you there. [131013570060] |As for executability, a good rule of thumb would be to make everything that doesn't look like it could be executed, be nonexecutable. [131013570070] |The kernel can execute scripts whose first two bytes are #!, ELF binaries whose first four bytes are \x7fELF where \x7f is the byte with the value 12, and a few rarer file types (a.out, anything registered with binfmt_misc). 
[131013570080] |Hence the following command should restore your permissions to a reasonable state (assumes bash 4 or zsh, otherwise use find to traverse the directory tree; warning, typed directly into the browser): [131013570090] |Note that there is a simple way to back up and restore permissions of a directory tree, on Linux and possibly other unices with ACL support: [131013580010] |Installing a second hard drive [131013580020] |My system currently runs an XBMC live install. [131013580030] |I am installing a second hard drive in my system, but I am somewhat of a linux newb and only know the basics. [131013580040] |Since it is XBMC I believe I need to do everything from the command line. [131013580050] |Can anyone give me a little step by step on what I need to do with the commands and proper parameters? [131013580060] |As an aside, I am planning on formatting as ext2. [131013580070] |The game plan is to share this drive on my network, so I can copy files to it from my Mac running OSX. [131013580080] |Should I use a different format? [131013590010] |So I am not sure if this is correct or the best way but this is what I did and it seems to work: [131013590020] |
  • As root: fdisk -l to see all the partitions and find how mine is listed. [131012590030] |It will be something like /dev/sda1. [131012590040] |As a note, the drive I installed had been used before, so it already had existing partitions. [131012590050] |With an unpartitioned drive I have a feeling this will not work.
  • [131013590060] |
  • As root: fdisk /dev/sda to run fdisk. [131012590070] |You drop the number off the end to get the physical drive name. [131012590080] |Type: p to list the partitions on the drive again. [131012590090] |This is mainly a sanity check to make sure you are working on the correct drive. [131012590100] |If there are partitions on the drive that you need to delete, type d and follow the prompts to delete them. [131012590110] |Type: n to create a new partition. [131012590120] |It will prompt you to create an extended or primary partition. [131012590130] |I wanted a primary partition, so I typed p. [131012590140] |It will then prompt for the partition number. [131012590150] |I did 1. [131012590160] |It will then prompt for the first cylinder number. [131012590170] |I just hit enter for the default of 1. [131012590180] |It will then prompt for the last cylinder number. [131012590190] |I just took the default based on my disk size and hit enter. [131012590200] |You can then type p again to verify the new partition is entered correctly. [131012590210] |Type t to enter the hex code for the type of partition you want. [131012590220] |I did 83, the standard Linux type, since I was going to format it as ext2. [131012590230] |Type w to write the partition table.
  • [131013590240] |
  • As root: mkfs -t ext2 /dev/sda1 to actually format the partition.
  • [131013590250] |
  • As root: fsck -f -y /dev/sda1 to check the new filesystem. Note that fsck only checks it; to have the partition mounted on each reboot you also need to add an entry for it to /etc/fstab.
  • [131013590260] |
  • Reboot your box.
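As a rough, non-interactive summary of the walkthrough above (a sketch only -- the device name /dev/sda and the mount point /mnt/data are examples; double-check the device with fdisk -l first, since these commands are destructive):

fdisk /dev/sda            # n (new), p (primary), 1, accept defaults, t, 83, w
mkfs -t ext2 /dev/sda1    # create the filesystem on the new partition
fsck -f -y /dev/sda1      # optional sanity check of the fresh filesystem
mkdir -p /mnt/data        # create a mount point (example path)
mount /dev/sda1 /mnt/data # mount it; add an /etc/fstab entry to make this permanent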
  • [131013600010] |Ext2 does not do journalling. [131013600020] |I.e. if have a power-loss or something like that, there is a probability to lose file meta-data with ext2. [131013600030] |Plus, a fsck-run is absolutely needed after a crash which will take large amounts of time on current sized disks. [131013600040] |Thus, just use ext3 of xfs, which do both have journaling. mkfs.xfs runs faster. ext4 is relatively new, and one is usually a little conservative when it comes to filesystems. [131013600050] |If you want to use your complete disk under linux, you do not even need to partition it. [131013600060] |You can just use /dev/sdX then when creating or mounting the disk. [131013600070] |If you want to partition it, use cfdisk, since it has a convenient user interface. [131013600080] |Be sure to use the right devices for creating the filesystem. [131013600090] |Check via [131013600100] |What devices are available and already in use. [131013600110] |shows the vendor/model information and size and more to double check, if you get the right device. [131013600120] |Create filesystem then: [131013600130] |or [131013600140] |Test to mount it via [131013600150] |If the mount point does not exist, you have to create it first via mkdir. [131013600160] |You can change the ownership of the base directory after mounting it via [131013600170] |To mount the disk after each boot, usually you configure it via /etc/fstab [131013600180] |Since you use a Live-CD, perhaps they have a different style of configuration. [131013600190] |To check if some hardware problems happened during mkfs you can enter [131013600200] |and check the most recent output. [131013600210] |On alternative to having to specify a device name in fstab is to specify a label during filesystem creation (e.g. mkfs.ext3 -L name) and use LABEL=name in the fstab (or with mount) instead of the device name. [131013610010] |Tell mplayer to prevent the screensaver from kicking in while playing [131013610020] |I know mplayer has some heartbeat setting but I don't recall what it is, could anyone tell me? [131013610030] |another one of those annoying things that used to just work and somewhere along the line stopped being default [131013620010] |mplayer has the switch -heartbeat-cmd to run a command every 30 seconds, but as the man page says: [131013620020] |This can be "misused" to disable screensavers that do not support the proper X API [131013620030] |The actual switch meant to disable screensavers is -stop-xscreensaver; you should probably try that first [131013630010] |What is the Fedora equivalent of the Debian build-essential package? [131013630020] |What is the Fedora equivalent of the Debian build-essential package? [131013640010] |The closest equivalent would probably be to install the below packages: [131013640020] |However, if you don't care about exact equivalence and are ok with pulling in a lot of packages you can install all the development tools and libraries with the below command. [131013650010] |Is there any program to provide a consistent interface across multiple archive types? [131013650020] |At the moment, if I download a compressed file, it could be any of a .tar.gz archive, a tar.bz2 arhive, a .zip archive or a .gz archive. [131013650030] |And each time I do so, I have to remember what the command line options for that program are. [131013650040] |Is there any CLI program where I can just go: [131013650050] |undocompression somefile.?? [131013650060] |and let it figure out what format the archive is in? 
[131013660010] |GNU tar (and star) has at least some compression auto-detection capabilities: [131013660020] |just work. [131013670010] |I think Ark, the KDE archiving tool, can be run without a GUI. [131013670020] |From the ark manpage [131013670030] |Will extract archive.tar.bz2 into the current directory without showing any GUI. [131013670040] |Ark's support of various archive formats depends on which apps you have installed (e.g. for rar it depends on unrar), but I don't know of any formats it can't handle. [131013680010] |You can use p7zip. [131013680020] |It automatically identifies the archive type and decompresses it. [131013680030] |p7zip is the command-line version of 7-Zip for Unix/Linux, made by an independent developer. [131013680040] |7z e [131013690010] |In Debian/Ubuntu there is the unp package, which is a Perl script that acts as a frontend for many archiving utilities. [131013700010] |From another question: atool, which also handles various archive types and is more powerful than unp because it also handles listing of contents, finding differences between archives, etc. [131013710010] |I found this little snippet a while ago and have been using it since. [131013710020] |I just have it in my .bashrc file. [131013720010] |How does the "top" command show live results? [131013720020] |How can I write a shell script that shows the results in real time? [131013720030] |Something like the top command that updates the results after some fixed interval of time. [131013730010] |It would help if you were a lot more specific about what you are trying to do. [131013730020] |Here is an extremely simplistic example: [131013740010] |Most of that data is generally exposed in the /proc virtual file-system primitives. [131013740020] |Each process has an entry in /proc in a directory named after its PID. [131013740030] |So /proc/5437 would have the primitives for process 5437. [131013740040] |Reading the primitives there and parsing them appropriately would get you close to what top does. [131013740050] |Top actually works by making specific function calls that extract this information directly from the kernel instead of pulling it from files. [131013740060] |To do the same from bash you'd have to either pull it from the /proc virtual file system, or extract it out of other calls such as to ps. [131013740070] |As for real-time, that isn't quite doable at the level of detail top provides. [131013740080] |You can slice time fine enough that it appears to be real-time, but you will still be getting time-slices. [131013750010] |Top uses curses and reads the /proc file system. [131013760010] |Erm, in case you're looking at top output for a longer time, and not just to check if a program is doing fine, I suggest using htop. [131013760020] |It gives you a lot of real-time information and is easier to control and manage. [131013760030] |You can change the layout of the output, such as bar graphs and columns. [131013770010] |You can use the watch(1) command to run your script at regular intervals: [131013770020] |This will run myscript.sh every second, clearing the screen between each run and putting a timestamp in the corner. [131013770030] |You can use the -d option and it will even highlight differences in the output between runs.
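A minimal sketch of that (the script name is just an example):

    watch -n 1 ./myscript.sh       # re-run every second, clearing the screen between runs
    watch -n 1 -d ./myscript.sh    # same, but highlight what changed since the previous run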
[131013780010] |How can I write an OpenSuse script that will change my boot menu.lst file and add "nomodeset" automatically both to failsafe and normal boot option? [131013780020] |How can I write an OpenSuse script that will change my boot menu.lst file and add "nomodeset" automatically both to the failsafe and normal boot options? [131013780030] |(This script is for suseStudio; I'm going to put it under "Run script whenever the appliance boots" for the first boot.) [131013790010] |If you know what will definitely be surrounding it on the line, you can use sed or Perl with something like s/preceding-text$/preceding-text nomodeset/g [131013790020] |The $ assumes that it's right at the end of the line. [131013790030] |The preceding text would have to be unique to the lines you need to hit. [131013800010] |Look for the kopt=, defoptions= and altoptions= lines beginning with ## in your menu.lst file. [131013800020] |These specify options to be used when the menu entries are first created by installing a new kernel. [131013800030] |No need to edit the file afterwards. [131013800040] |You can also update all your existing entries by running update-grub. [131013800050] |For grub2, the file to edit is /etc/default/grub, and the update command is the same. [131013800060] |However, the parameters to change are GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX. [131013800070] |For a one-off change, you can make temporary edits to the options directly in grub, by entering the "edit commandline" mode. [131013810010] |Read the release notes. [131013810020] |Basically, you can use your favorite text editor to edit /boot/grub/menu.lst as root. [131013810030] |Add nomodeset to the kernel boot line. [131013810040] |Or set the NO_KMS_IN_INITRD sysconfig variable to yes via YaST. [131013820010] |Why is echo a shell built-in command? [131013820020] |Why is echo not an independent utility like ls, ps, cat, etc.? [131013820030] |Why is it shell-specific? [131013820040] |Any good reasons? [131013830010] |According to the dash man page, there are two classes of builtins. [131013830020] |Some commands have to be built in because they cannot work if they are external. cd is one such: if it were external, it could only change its own directory; it couldn't affect the current working directory of the shell. [131013830030] |The other class of commands are built into the shell purely for efficiency. [131013830040] |The dash man page mentions printf, echo, and test as examples. [131013830050] |In the beginning, the builtins in the second class were separate executables. [131013830060] |These separate executables are still available on every Unixy system I've used, even though they're also built into every shell you're likely to use. [131013830070] |By checking old manuals, I've been able to narrow the window where echo got built into the shell to some time after Unix System III was released (1982), where echo is not built into the shell, but no later than Unix System V Release 3.1 (1987), where it is. [131013830080] |I don't have manpages for any of the major candidate systems that appeared between these two, so echo could have been made a builtin before SVR3. [131013830090] |If so, likely candidates are 4BSD, SVR1, or SVR2. I include BSD in the list because System V Release 1 brought a lot of BSD innovations back home to AT&T, including vi and curses, so that could explain echo's appearance as a builtin in the SVR3 manuals I have here even if it were a BSD feature originally.
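As a quick way of seeing both flavours on your own system (the output and the /bin/echo path are typical for a GNU/Linux box and may differ elsewhere):

    type -a echo        # in bash this lists the builtin plus any external echo found in $PATH
    builtin echo hello  # force the shell builtin
    /bin/echo hello     # run the external binary directly
    env echo hello      # another way to bypass the builtin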
[131013840010] |Although most shells include a built-in echo nowadays, the GNU CoreUtils also include a standalone implementation of it: [131013840020] |It sounds like you don't have GNU Coreutils installed (most Linux-based desktop & server OSes have it installed by default, but embedded Linux or other UNIX systems might use alternative collections of shell utilities instead). [131013840030] |BTW: if you look at Busybox, you'll see that ls, ps and cat are also built-in commands there (or at least can be; it's used for embedded systems and everything not needed can be left out). [131013850010] |According to the Bash Reference Manual, it's about convenience. [131013850020] |Shells also provide a small set of built-in commands (builtins) implementing functionality impossible or inconvenient to obtain via separate utilities. [131013850030] |For example, cd, break, continue, and exec cannot be implemented outside of the shell because they directly manipulate the shell itself. [131013850040] |The history, getopts, kill, or pwd builtins, among others, could be implemented in separate utilities, but they are more convenient to use as builtin commands. [131013850050] |All of the shell builtins are described in subsequent sections. [131013850060] |The Advanced Bash Scripting Guide has a more detailed explanation: [131013850070] |"A builtin is a command contained within the Bash tool set, literally built in. [131013850080] |This is either for performance reasons -- builtins execute faster than external commands, which usually require forking off a separate process -- or because a particular builtin needs direct access to the shell internals." [131013850090] |Also note that echo does exist as a standalone utility on some systems. [131013850100] |Here's what I have on my Darwin system (MacOSX 10.5.8 - Leopard): [131013850110] |echo is also available as a builtin, but apparently my scripts use /bin/echo on my Mac, and use a Bash builtin on most of my Linux & FreeBSD systems. [131013850120] |But that doesn't seem to matter, because the scripts still work fine everywhere. [131013860010] |There is a third reason for some commands to be built in: they can be used when running external commands is impossible. [131013860020] |Sometimes a system becomes so broken that the ls command does not work. [131013860030] |In some cases, an echo * will still work. [131013860040] |Another (more important!) example is kill: if a system runs out of free PIDs, it is not possible to run /bin/kill (because it needs a PID :-), but the built-in kill will work. [131013860050] |Btw, which is an external command (at least it is not internal in bash), so it cannot list internal commands. [131013860060] |For instance: [131013870010] |Clearing GNU Screen after full-screen application [131013870020] |When working at a normal xterm (not sure about a "real" terminal), when a full-screen program such as man or vim is closed, it disappears, leaving your screen so you can see your prompt and previous prompts, including where you launched the program that just closed. [131013870030] |When I am running within GNU Screen, however, when the program is closed it does not clear but is simply shifted up so a prompt can be displayed. [131013870040] |To me this is ugly, and I'd like to know if the "normal" behaviour can be restored. [131013870050] |I realise I could manually clear the screen myself but a) I don't want to and b) that would result in a totally clear screen, not what I'm after (though perhaps better, if this is as good as it gets).
[131013880010] |This is related to your termcap settings for screen. [131013880020] |Maybe you can try starting screen with the -a command-line option. [131013890010] |Some terminals, such as xterm, support what is known as an “alternate screen”: there are separate screens for full-screen programs and for scrolling programs. [131013890020] |In xterm, you can switch between the two screens with the “show alternate screen” command at the bottom of the Ctrl+mouse 2 menu. [131013890030] |This behavior is disabled by default in screen but can be enabled with the altscreen option: add altscreen on to your ~/.screenrc. [131013900010] |Gilles is correct: [131013900020] |altscreen on [131013900030] |in your .screenrc or /etc/screenrc (the latter if you want this behavior for all users) will do what you describe. [131013900040] |This is also the case when screen is used from a 'real' terminal. [131013900050] |Here is the terse description in the docs: gnu.org [131013910010] |Resources for very low-level (board bring-up) [131013910020] |I've worked with a few embedded systems, but now I'd like to make my own piece of hardware, and despite a pretty thorough knowledge of Linux, I have no idea how to get Linux up and running on new hardware. [131013910030] |So I'm looking for resources on how to do some board bring-up/support. [131013910040] |Some more details: I'm wondering about the following kinds of things: How does Linux know the processor configuration - e.g. how the pins are configured, how much cache there is, whether an MMU is present? [131013910050] |How does Linux know about the board layout - e.g. which pins are the memory bus, where the row select and column select are, which pins are an i2c bus, and so on? [131013920010] |A relatively popular boot loader for embedded devices is Uboot: [131013920020] |http://www.denx.de/wiki/view/DULG/Introduction [131013920030] |http://sourceforge.net/projects/u-boot/ [131013920040] |The Uboot project originated in Germany; "U-Boot" means submarine in German, so the name sounds somewhat funny to German ears. [131013920050] |I hope I didn't tell you something obvious. [131013930010] |I worked on a system using U-Boot which had custom hardware and was ported to ARM and PowerPC. [131013930020] |There were two things that needed to be set up. [131013930030] |First, there is a place in u-boot where you can add board support to set registers and create handler functions for accessing RAM or flash on your device. [131013930040] |You then have to write similar support in the /arch part of the Linux tree. [131013930050] |I think the keywords to google for are "board support". [131013940010] |How to test a swap partition [131013940020] |I'm trying to diagnose some random segfaults on a headless server, and one thing that seems curious is that they only seem to happen under memory pressure and my swap usage will not go above 0. [131013940030] |How can I force my machine to swap to make sure that it is working properly? [131013950010] |Is this Linux? [131013950020] |If so, you could try the following: [131013950030] |And then either use a program (or programs) that uses lots of RAM, or write a small application that just eats up RAM. [131013950040] |The following will do that (source: http://www.linuxatemyram.com/play.html): [131013950050] |I added the sleep(1) in order to give you more time to watch the process as it gobbles up RAM and swap. [131013950060] |The OOM killer should kill this once you are out of RAM and swap to give to the program.
[131013950070] |You can compile it with [131013950080] |where filename.c is the file you saved the above program in. [131013950090] |Then you can run it with ./memeater. [131013950100] |I wouldn't do this on a production machine. [131013960010] |Development environment for C [131013960020] |Looking for ideas on setting up a convenient and productive development environment for C development. [131013960030] |I found C editing with Vim very helpful, but I would like to get a wider sampling of suggestions. [131013970010] |I persisted with Vim for a while; it's worth knowing the Vim basics, as you will always find a UNIX box somewhere that only has that, but I tried Emacs and haven't looked back. [131013970020] |Eclipse is a 'modern' alternative; I have all three on my system! [131013980010] |You can use the NetBeans C/C++ pack, which works with G++/GCC: [131013980020] |netbeans.org/features/cpp [131013990010] |It's very much a personal preference, so I don't think I can do much more than tell you what I use. [131013990020] |I have Emacs set up with Flymake mode, which periodically compiles the file you're working on and parses the compiler output to figure out what errors you've made. [131013990030] |It highlights the errors/warnings in the buffer, and shows the associated compiler error message. [131014000010] |My personal favorite is exVim. [131014000020] |It has lots of Vim plugins, which makes it very easy to use with a large code base. [131014000030] |It will take approx. one day to learn its features, but it will be worth it. [131014010010] |I edit C with Vim in the console. [131014010020] |I employ makefiles and have a number of compilers to test my code against, including gcc, clang (LLVM) and icc. [131014010030] |Other things I consider part of my development environment: the use of grep, debuggers and valgrind. [131014010040] |A scripting language for more complicated builds. [131014010050] |Git for version control. [131014010060] |More important in my mind than what you use to edit the code is how you structure your code. [131014010070] |How to lay that out is probably a question for Stack Overflow, but, as you asked, I often have a separate directory for object code not for distribution and another folder for the resultant binar(y|ies). [131014010080] |I have a testing folder which contains more C files that use all the generic code I'm writing, and these I valgrind, along with the final project file.
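As a small illustration of the "several compilers" habit mentioned just above, a loop like the following builds the same file with each one and reports which succeeded (it assumes gcc and clang are installed; the file name is made up):

    for cc in gcc clang; do
        "$cc" -Wall -Wextra -g -o "/tmp/prog-$cc" main.c && echo "$cc: build ok"
    done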
• [131014020010] |Emacs/Vim/Eclipse/... - Personally I am an Emacs user. [131014020020] |If you find the control sequences tire your pinky, just Viper-Mode it up. [131014020030] |Emacs is so well integrated into Unix, making it very easy to control everything all from one place. [131014020040] |Vim also does a good job here, but I find Elisp to be a much more powerful extension language than Vim script. [131014020050] |One could talk for hours about all of the ways to set up Emacs for C development. [131014020060] |Flymake Mode has been mentioned, and is a super start to things. [131014020070] |I am not familiar with Eclipse; I do not find it leaves enough room on my screen for code, nor do I like how bloated it is (Vim users will say the same thing about Emacs). [131014020080] |I am also unfairly biased against anything written in Java, for purely aesthetic reasons.
• [131014020090] |Ctags - Tags your C (or many other languages') functions so that Vim or Emacs or whatever can do a little bit of hyper-text linking in your files. [131014020100] |Say you're wandering around and you see a function and you're scratching your head saying "What does that one do again? [131014020110] |The naming is a bit vague." [131014020120] |Plink-plank-plunk, you can zap straight to its definition.
• [131014020130] |Cmake/Gnu-Autotools - Make is great, but at some point you need to abstract things a bit so that your project can build itself on all sorts of systems you have no way of testing for. [131014020140] |If you only need people to build your code on a *nix, Autotools is great, but really you should familiarize yourself with Cmake anyway. [131014020150] |The Cmake team builds code in every conceivable configuration possible and makes sure that you don't have to go through the headache. [131014020160] |If you want your project to be easily picked up by others, one of these tools is crucial.
• [131014020170] |Git/Mercurial/Subversion/... - You could spend months researching version control software, but you should probably just go with Git. [131014020180] |It's solid, it's distributed, the @$!#%& Linux kernel is tracked with it. [131014020190] |If it's good enough for Linus, it's got to be good enough for you. [131014020200] |I also hear good things about Mercurial; apparently G**gle uses it, so it's probably not bad. [131014020210] |Some people appear to like Subversion and CVS and whatnot. [131014020220] |I don't like them because they are monolithic, which to me is very inconvenient and limiting.
• [131014020230] |Stumpwm/wmii/XMonad/... - At some point you're going to realize that anything you can do to keep your work flowing is going to greatly improve your output. [131014020240] |One of the best ways to keep your brain from breaking its context is to switch to a tiling, KEYBOARD-DRIVEN window manager. [131014020250] |I am a personal fan of StumpWM, the Emacs of window managers. [131014020260] |Fully implemented in an on-the-fly-customizable Common Lisp process, anything you find yourself doing repetitiously can be banished away into functions and bound to commands. [131014020270] |Great stuff. [131014020280] |I don't know much about any of the others, but maybe further elaboration is better left to another thread. [131014020290] |USE THE KEYBOARD AS MUCH AS POSSIBLE.
• [131014020300] |GDB - I'm not familiar with other debuggers, but this appears to be the de facto standard.
• [131014020310] |Valgrind - I don't know of anything else that does what this does so well. [131014020320] |Valgrind is crucial for all those pesky profiling/memory-leak hunts you want to go on. [131014020330] |You just can't write code with malloc/calloc without Valgrind.
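To make the Valgrind item concrete, a typical compile-and-check cycle looks roughly like this (the file and program names are examples):

    gcc -g -O0 -o testprog test.c            # -g keeps debug symbols so Valgrind can name call sites
    valgrind --leak-check=full ./testprog    # reports leaked blocks and where they were allocated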
[131014030010] |I use Kate (text editor), gcc/avr-gcc and make, with Git as VC. [131014030020] |I mainly do embedded stuff in C and computer-side stuff in Python. [131014040010] |You can try Motor IDE. [131014040020] |It's curses-based, so you should feel right at home (tm) :) It's also a bit sad because it hasn't been maintained for 5 years now, so some things might be broken. [131014040030] |Still, I believe it's worth a try. [131014050010] |I use gedit with an embedded terminal. [131014060010] |If you're doing C development under Unix/Linux, you absolutely have to be using Cscope if the project is of any significant size. [131014060020] |Cscope is a developer's tool for browsing source code -- jump to function foobar's definition, find all places where the variable foo is referenced, find all files including bar.h, change all occurrences of bar into baz, etc. [131014060030] |Also, you mentioned Vim in your post... here is a tutorial on using Vim & Cscope together. [131014070010] |How do I switch from an unknown shell to bash? [131014070020] |I was surprised that I didn't find this question already on the site. [131014070030] |So, today $ came up after I logged in as a new user. [131014070040] |This was unexpected because my main user's prompt starts with username@computername:~$. [131014070050] |So, how do I switch from this other shell to bash? [131014080010] |You type in bash. [131014080020] |If you want this to be permanent, change the default shell to /bin/bash by editing /etc/passwd. [131014080030] |Here are some snippets from my /etc/passwd; the very last field contains the shell. Modifying the field after the last : to a valid or invalid shell will work. /bin/false and /sbin/nologin both mean the user doesn't have a real login shell, although if PAM is not set up right this doesn't mean they can't log in (I reported a bug on this in Arch Linux, because you can log in graphically without having a login shell). /bin/bash and /bin/zsh are both valid shells; see /etc/shells for a list of valid shells on your system. [131014080040] |Here's my /etc/shells if you're interested. [131014080050] |Yes, you can use chsh or usermod to do the same things; just remember these are just structured text files, and TIMTOWTDI. [131014090010] |Assuming the unknown shell supports running a command by its absolute path, you could try: /bin/bash [131014090020] |To change the default shell, I would use chsh(1). [131014090030] |Sample usage: chsh -s /bin/bash your_user [131014100010] |If chsh or manually editing the config won't work, but a ~/.profile script is executed at login, add this line: [131014110010] |Grab certain contents of a file [131014110020] |So I know tools exist for this problem because I've heard about them, but I don't know what they are. [131014110030] |I want to do something like filter out all data but the usernames in /etc/passwd. [131014110040] |For example, I would want to grab user1, user2, and user3 from the following file. [131014110050] |In this case, the logic could be "Grab text up to the first ':' on each line of the file". [131014110060] |The output would be: [131014120010] |Try something like: [131014130010] |cut exists for exactly this purpose. [131014130020] |The -d flag specifies the delimiter, and -f specifies which fields to output: [131014130030] |The argument to -f can be something like 1,3 to show the first and third fields, or 1-3 to show the first three; there are also -b and -c flags to read bytes and characters instead of fields.
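For the /etc/passwd example above, the whole job is one short command (assuming the usual colon-delimited format):

    cut -d: -f1 /etc/passwd    # print only the first field of each line, i.e. the usernames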
[131014130040] |If you need something more flexible, generally awk will do the trick (see Matthew's answer). [131014140010] |Here's a Perl one-liner: [131014150010] |Passing an argument to multiple commands in a single line [131014150020] |I'd like to be able to run multiple commands on the same file in a single line. [131014150030] |The way I currently do this is: [131014150040] |My gut instinct tells me that there should be a way to provide the filename argument only once and pipe it to both commands simultaneously through xargs or something similar: [131014150050] |When I try this, only the first command runs. [131014150060] |How can I achieve what I want to do? [131014160010] |And the commands are executed in parallel, if that is what you want to do. [131014160020] |Otherwise, replace the & characters. [131014170010] |For shells such as Bash, Korn and Z shell that have process substitution, you can do this: [131014180010] |I wouldn't vote for this myself. [131014180020] |It's silly and dangerous, but just in the interest of listing the ways to do this, there's: [131014180030] |for cmd in "commandA" "commandB" "perl -ne '...'" ; do eval $cmd file ; done [131014190010] |You can use xargs to construct a command line, e.g.: [131014190020] |Just pipe the above into bash to run it: [131014190030] |Extending the example to all the *.c files in the current directory (escaping the ls here to prevent any shell alias substitution): [131014200010] |Comparing two files in Vim [131014200020] |Is it possible to view two files side-by-side in Vim? [131014200030] |If so, how can I set up my editor to do this, and is there a way to diff between the two files within Vim? [131014200040] |I am aware of the :next and :prev commands, but this is not what I'm after. [131014200050] |It would really be nice to view the two files in tandem. [131014210010] |Open the side-by-side view: [131014210020] |Change between them: [131014210030] |Check out the vimdiff command, part of the vim package, if you want a diff-like view. [131014220010] |Linux partition table [131014220020] |What kind of partition table does Linux create by default? [131014220030] |Is it msdos? [131014220040] |Is it different depending on the Linux distribution used (I'm using Ubuntu)? [131014220050] |Is there any command-line utility I could use to find out that information? [131014230010] |Unlike Microsoft Windows drive letters (C:, D:, etc.), partitions on Linux show up as device files (/dev/sda1, /dev/sda2, /dev/sdb1, etc.). [131014230020] |You can create the root directory on any one of the partitions (as long as the partition is large enough) or spread it across several partitions (recommended). [131014230030] |In modern Linux distributions the filesystems you will find most often are ext2 and ext3, but they will also support read/write for NTFS and FAT32. [131014230040] |Run fdisk -l as root to see how your disk has been partitioned. [131014240010] |Open up a terminal and list your drives first: [131014240020] |You will get output similar to this: [131014240030] |From this you can see /dev/sdb and /dev/sda as disks. [131014240040] |To view the partition table of one, do: [131014240050] |Press "p" to list partitions or "m" for help. [131014240060] |From here you can modify partition tables, and when you are all finished press "w" to write the changes to disk. [131014240070] |Next, if you create a new partition, let's say for ext3, you will need to use something like mkfs or a GUI-based tool to create an ext3 filesystem there.
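A minimal session matching those steps might look like this (the device names are only examples; run as root and be careful before pressing "w"):

    fdisk -l              # list every disk and its partition table
    fdisk /dev/sdb        # interactive: "p" prints the table, "m" shows help, "w" writes changes
    mkfs.ext3 /dev/sdb1   # then put an ext3 filesystem on the new partition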
[131014250010] |There is no default partition format for Linux. [131014250020] |It can handle many popular and less popular formats. [131014250030] |The type is determined by the tool you are using. fdisk can handle standard MS-DOS partition tables, while parted can handle GUID partition tables as well. [131014250040] |You can find other tools for any format you like. [131014250050] |Most distributions will create MS-DOS partitions on a standard PC and possibly use GUID tables on Macs, for a simple reason: Windows cannot boot from a GUID partition table with a BIOS (which is what a standard PC has), only with EFI (which is what Macs have). [131014250060] |As for the second part: the fdisk -l command presented will print standard partitions (those used on MS-DOS). parted -l will show all "partitions", including for example LVM logical volumes. [131014250070] |EDIT: If you want to dump the partition table (binary), use dd if=/dev/your_disk count=1. [131014260010] |If you install Linux on a PC, the installation program will create one or more partitions in a format that is compatible with DOS, OS/2 and Windows, because that's the de facto standard for partitions on a PC. [131014260020] |If you install Linux on some other kind of hardware, the installation program may use a different partitioning scheme. [131014260030] |Linux supports a lot of different schemes (you can see them all in the kernel configuration — search CONFIG_.*_PARTITION in /boot/config-*). [131014260040] |Even on a PC, you might see other partition types for a variety of reasons: because you went out of your way to create them, because you inserted a disk from some other architecture, because you have another operating system that uses different partition types (e.g. *BSD, Solaris). [131014270010] |X-based email for reading mail from cron jobs [131014270020] |Can anyone give a recommendation for a simple X-based equivalent of mail / mailx for reading the mail I get from cron jobs? [131014280010] |Evolution can be set up to read system mail. [131014280020] |When setting up an account, simply choose "Local Delivery" as the server type on the "Receiving Mail" portion of the setup. [131014280030] |Then, on that same page, set the "Path" to /var/mail/ or wherever your mail spool is. [131014280040] |The "Local Delivery" option will delete the mail from /var/mail/. [131014280050] |If you use the "Standard Unix mbox spool file" option, it will not. [131014280060] |Whether or not you want to call Evolution "simple" is another matter. [131014290010] |You could write a shell script that calls a command-line mail tool and then uses notify-send, (g)xmessage or zenity to display the individual messages. [131014300010] |Evolution is fine but heavy. [131014300020] |Sylpheed (http://sylpheed.sraoss.jp/en/) is simpler and doesn't use as many resources. [131014310010] |If you don't want Evolution or Sylpheed, Thunderbird can also read /var/mail mail. [131014310020] |You just have to create a new account and choose "movemail" as the account type. [131014310030] |See the Mozilla wiki for more info. [131014320010] |What does 'patch unexpectedly ends in middle of line' mean? [131014320020] |This is the output of my patch command: [131014320030] |The command was [131014320040] |The patch was produced using git: [131014320050] |What does "patch unexpectedly ends in middle of line" mean, and is it a problem? [131014320060] |Is it referring to hunk 16 or 17? [131014320070] |What can I look for in the patch file to figure out what's causing this? [131014330010] |The message refers to Hunk 16.
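The warning typically just indicates that the last line of the patch file has no trailing newline, which can happen when a patch is trimmed by hand or mangled in transit; a quick way to check (the file name here is an example) is to look at the final byte:

    tail -c 1 mychange.patch | od -c    # if the output does not end in \n, the file ends mid-line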
[131014330020] |This GitHub discussion is probably related to your issue. [131014340010] |vi mode doesn't display correctly on new term [131014340020] |This is a continuation of my previous questions. [131014340030] |I currently have the following in ~/.zsh.d/functions.sh [131014340040] |and the relevant sections from my ~/.zshrc [131014340050] |This works right, except that it doesn't display -- INSERT -- when the shell is first started; after that it works as expected. [131014340060] |Other improvement suggestions are welcome. [131014340070] |Update: I decided to work around the issue, though I still don't know why it works. [131014340080] |For now I've set psvar[1] to -- INSERT -- out of the box, since I know it will be in insert mode on start. [131014340090] |This doesn't feel like a proper resolution though. [131014350010] |In my hacking around, it appears I fixed it by setting [131014350020] |before doing anything else with my vimode... [131014350030] |I'm not sure I'm 100% satisfied with my solution, but it functions. [131014360010] |Change prompt color depending on user or root in zsh [131014360020] |In zsh you can have a %# in your PS1 (or whatever PROMPT variable), which basically means display % if a normal user or display # if root. [131014360030] |I'm wondering if there is any way to affect this so that the % or # changes color depending on whether it's a user or root (red for root, blue for a user). The obvious way is just to change the PS1 in my root's ~/.zshrc, but considering this is already a special symbol, I'm wondering if there isn't perhaps a way I can use the same PS1 for both... something specific to %# like it is for zsh (I'm sure there are other hacks I could do too, like an if-then statement). [131014370010] |%(!.%{\e[1;31m%}%m%{\e[0m%}.%{\e[0;33m%}%m%{\e[0m%}) [131014370020] |That should work to change the hostname (%m) to a different color (red) if you are root. [131014370030] |I don't have a zsh shell to test it on but it looks correct. [131014370040] |Here's why: [131014370050] |%(x.true.false) :: Based on the evaluation of the first term of the ternary, execute the correct statement. '!' is true if the shell is privileged. [131014370060] |In fact %# is a shortcut for %(!.#.%). [131014370070] |%{\e[1;31m%} %m %{\e[0m%} :: the %{\e[X;Ym%} is the color escape sequence, with X as the formatting (bold, underline, etc.) and Y as the color code. [131014370080] |Note you need to open and close the sequence around the term whose color you are looking to change, otherwise everything after that point will be that color. [131014370090] |I've added spaces here around the prompt term %m for clarity. [131014370100] |http://www.nparikh.org/unix/prompt.php has more options and details around the color tables and other available options for zsh. [131014380010] |%(# tests whether the shell is running as root. [131014380020] |Changing this to %(! tests whether the shell is running with elevated privileges (which covers things like newgrp, but not logging in as root). [131014390010] |ipfw: Traffic Shaping [131014390020] |I'm not sure what, but it seems like I'm doing something wrong...
My objective is to be able to limit some of my traffic, to be exact WWW traffic; one of my clients is running what is called a web proxy, where end users can surf any web page through their site. If anyone is interested, take a look at: [131014390030] |http://www.thespacesurf.com/ [131014390040] |So here is my /etc/ipfw.rules file, followed by ipfw show and ipfw pipe show. [131014390050] |According to mrtg I'm doing way over the 1 Mbit/s that I set in my ipfw. [131014390060] |I'll be more than happy to provide whatever additional information is needed to solve this issue, but for starters: [131014400010] |Hi, [131014400020] |first, please check whether net.inet.ip.fw.one_pass is set. [131014400030] |Second, I don't think you need that mask parameter on your pipe configuration. [131014400040] |You can't always be sure that packets come from port 80, e.g. if a user communicates from behind a NAT, etc. [131014400050] |Third, I would try it without rules 200 and 300. [131014400060] |I'm not quite sure how it handles piping internally, but the Traffic Shaping section of ipfw(8) lists these tips: [131014400070] |CHECKLIST Here are some important points to consider when designing your rules: [131014400080] |Most connections need packets going in both directions. [131014400090] |It is a good idea to be near the console when doing this. [131014400100] |If you cannot be near the console, use an auto-recovery script such as the one in /usr/share/examples/ipfw/change_rules.sh. [131014400110] |And fourth, I would change the default rule (i.e. the last rule, 65000) to deny all. [131014400120] |It's good firewall design, and without it all these other allow rules are just wasted ;) [131014410010] |DVD drive constantly spins up/down when idle [131014410020] |DVD drive constantly spins up/down when idle on my notebook. [131014410030] |I cannot track down what is causing it. [131014410040] |The process of spinning up/down is quite noisy and disturbing. [131014410050] |lsof cannot find anything that has the device file opened, or anything that keeps any file opened. [131014410060] |It is some regression, but I don't see any update that might have caused it. [131014410070] |The drive is mounted/unmounted by udisks. [131014410080] |EDIT: To answer:
  • [131014410090] |It is a regression I need to pinpoint
  • [131014410100] |There are no errors in dmesg
  • [131014410110] |It happens when the disk is mounted
  • [131014410120] |It worked in the past with polling
  • [131014410130] |The only recent change was a rebuild of udisks with a patch named "Fix long hangs on probing nonexistant floppies."
[131014420010] |A common cause of this behavior is device polling by daemons like hald and udevd. [131014420020] |You can temporarily disable polling by hald and udev to see if these may be the cause in your case. [131014420030] |If you are running hald: [131014420040] |Note your DVD drive may not be symlinked as /dev/dvd; it could be /dev/sr0, /dev/dvd0, etc. [131014420050] |If you get an error like "Cannot find storage device /dev/dvd", you can try one of the other device names. [131014420060] |Now temporarily stop polling by udev with this: [131014420070] |(this command will appear to hang - it is disabling polling until you hit Ctrl-c) [131014420080] |If this stops your device from frequently spinning, see my notes below about making the udev rule changes. [131014420090] |If disabling hald and udisks polling makes no difference, then you can re-enable hald control of the device with this: [131014420100] |Disabling polling of your DVD device may have the side effect of requiring you to mount optical media manually. [131014420110] |Your drive may continue to provide notification to udev that it should take some action upon media insertion. [131014420120] |Making udisks ignore your optical device takes just a simple rule. [131014420130] |I put mine in /etc/udev/rules.d/99-device-polling.rules: [131014420140] |Get your vendor and model strings from the output of udisks --show-info /dev/dvd. [131014420150] |Make the changes active by running udevadm trigger, then re-examine udisks --show-info /dev/dvd and note the line "detection by polling:" - it should be 0.
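A rule of roughly this shape should do it (a sketch only: the vendor and model strings are placeholders to be replaced with the values udisks --show-info reports, and it assumes a udisks 1.x that honours the UDISKS_DISABLE_POLLING udev property):

    SUBSYSTEM=="block", ENV{ID_VENDOR}=="MyVendor", ENV{ID_MODEL}=="MyDVDModel", ENV{UDISKS_DISABLE_POLLING}="1"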