[131087800010] |Filtering paths to a specific depth [131087800020] |I'm using rsync with the flags -nPaAXz ~/ to check which files are going to be copied. [131087800030] |This is far too verbose to make any sense of. [131087800040] |How could I filter the output so I view the list of files/folders that are going to be copied to a certain depth, e.g.: [131087800050] |1 folder deep [131087800060] |2 folders deep [131087810010] |In this case, you just want to filter the output, and you don't actually care about reusing the values in any way. [131087810020] |That's easy, then: just grep out lines that contain more slashes than you want. [131087820010] |The following grep will restrict rsync's verbose output to a depth of 2 folders: [131087820020] |Change the {0,2} above to {0,3} to filter to 3 folders deep; {0,4} will filter to 4 folders deep, etc. [131087830010] |This command takes each path and truncates it to n folders deep (defined in the \{0,n\} section of the sed command and the {0,n} section of the grep command). [131087830020] |It's then piped into uniq to filter out the duplicates. [131087830030] |rsync -nPaAXz src_dir dst_dir | sed -n 's@^\(\([^/][^/]*/\)\{0,2\}\).*@\1@p' | uniq [131087830040] |The same thing can also be achieved using grep: [131087830050] |rsync -nPaAXz src_dir dst_dir | grep -oE "^([^/]+/){0,2}" [131087830060] |Note that the above won't work with GNU grep versions <2.5.3 due to a bug. [131087840010] |Something like pwgen-win for Linux? [131087840020] |OK, is there some kind of password generator for Linux like this one for Windows, with a nice GUI and options like the Windows version has? [131087840030] |I know that there is pwgen, but as far as I know it does not support mouse/keyboard entropy and it does not have any kind of GUI front end... or does it? 
[131087840040] |http://pwgen-win.sourceforge.net/ [131087840050] |Thank you. [131087850010] |KeePass (and KeePassX) are password safes but also allow you to generate passwords using keyboard and mouse entropy. [131087860010] |Well, pwgen for Linux uses /dev/urandom. [131087860020] |It's not so bad (/dev/random would be better). [131087860030] |A better one is APG (http://www.adel.nursat.kz/apg/), which optionally uses /dev/random and asks for keyboard input to randomize even more. [131087860040] |If you want a GUI there's jpasswordgen (http://sourceforge.net/projects/jpasswordgen/) in Java, so it works everywhere. [131087860050] |IMHO, a GUI isn't very useful for this. [131087860060] |The goal is to get passwords, and pwgen/apg can even generate nicely formatted lists of passwords. [131087870010] |You could also give DuckDuckGo a try. [131087870020] |It is basically a search engine with many nice features, e.g. generating passwords: strong password with 17 chars. [131087870030] |The best thing is: you can use it with any operating system. :-) [131087880010] |What Linux distribution(s) contribute most to the Gnome project? [131087880020] |What Linux distribution(s) contribute most to the Gnome project (all sorts of commits)? [131087880030] |If I plan to contribute to the Gnome project, am I advised to use THAT distribution as my primary development platform, or does my choice remain a simple matter of taste? [131087880040] |Thanks. [131087880050] |EDIT [131087880060] |I'm a Linux user and my question concerns only Linux distributions, hence any other platform that supports Gnome does not fit my requirements. [131087890010] |If you're contributing to Gnome, what distribution (or even OS, as long as it supports Gnome, FreeBSD and Solaris being examples) you use is irrelevant. 
[131087890020] |Distributions supporting Gnome generally do so either by having employees/official developers dedicating time to Gnome development or by funding (or more likely both, for commercial distributions). [131087900010] |By lines of code, the answer is unequivocally Red Hat, as shown in last summer's Gnome code census. [131087900020] |That means Red Hat Enterprise Linux, or else Fedora. [131087900030] |But that metric isn't necessarily completely fair. [131087900040] |Other companies like Canonical contribute in other ways that are also valuable. [131087900050] |There was a huge controversy with much yelling and flaming, and good and bad points on both sides. [131087900060] |As for your own use, I don't think this is necessarily a compelling reason to choose a distribution, even if you're developing for Gnome. [131087900070] |The desktop environment is portable enough that it shouldn't matter (and, in fact, cross-distro development differences can help make the project stronger). [131087910010] |As mentioned, Red Hat develops more of GNOME than any other company. [131087910020] |Since Fedora is a bleeding-edge sort of distribution, it often receives GNOME updates very quickly, sometimes even before tarball releases (priorities). [131087910030] |So if you are a GNOME developer, chances are you want to be running the development branch of Fedora. [131087910040] |Alternatively, if you don't mind working hard, you can choose a distribution based on taste and build GNOME using JHBuild. [131087910050] |This is a powerful tool that can build all of the various GNOME suites, either from tarballs or from GNOME's VCS, git. [131087910060] |This implies that more often than not, you'll have a fresher version of GNOME than if you go the Fedora way. 
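For concreteness, a typical JHBuild session looks roughly like this (a sketch only, to be run after JHBuild itself is installed and configured; the module name gedit is just an example):

```shell
# Rough sketch of a JHBuild workflow (module name is an example):
jhbuild bootstrap     # build JHBuild's own prerequisite build tools
jhbuild build gedit   # build the gedit module plus all its dependencies
jhbuild run gedit     # run the freshly built binary in the JHBuild environment
```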
[131087910070] |Note that a JHBuild run rarely goes without trouble, but the bugs you'll expose in various GNOME modules (the equivalent of a distro package) and the learning you'll do along the way will help you understand GNOME even better. [131087910080] |You'll want to hang out in the #gnome-love channel on GimpNet, because you'll need the help. [131087910090] |Another kind of GNOME developer is one who is interested in a very specific GNOME package, as in my case. [131087910100] |That is, I always run the VCS version, so I don't even bother using JHBuild. [131087910110] |Note that this way I have to deal with a variety of issues; for example, sometimes it requires a later dependency than is available in my primary OS, Debian, and I am forced to also get either an upstream tarball or a VCS checkout. [131087910120] |This could mean that you'll have to risk the stability of your system, at least as vetted by your distro developers, and this can bite you here and there, especially if you are going to be playing with important technologies like GLib and DBus. [131087910130] |For me this isn't a big issue since a re-install is cheap, and I separate my "/home" dir and the rest via partitioning. [131087910140] |One other option is to install this custom stuff in "/opt", which is maybe what I should do :) [131087920010] |How to connect to OpenVPN with NetworkManager [131087920020] |I have installed an OpenVPN server on my OpenWrt 10.03 router [freshly flashed]: https://pastebin.com/raw.php?i=hRECWuf1 [131087920030] |It seems "ok". [131087920040] |I connect my PC to the LAN port of the router, and I want to try it out. [131087920050] |I'm using Fedora 14 with GNOME. [131087920060] |In the NetworkManager applet I set these things: this and this. [131087920070] |OK! [131087920080] |I try to connect, but it fails. 
[131087920090] |Here are the logs: https://pastebin.com/raw.php?i=gv2xChxW [131087920100] |One important thing: my router's [the one with the OpenVPN server] IP address is 192.168.1.2, and I didn't have to enter it anywhere. So how could the NetworkManager applet know the IP address of my OpenVPN server? I think this is the problem, but I just can't find where to enter 192.168.1.2. [131087920110] |P.S.: Yes, I tried to Google for "No server certificate verification method has been enabled." but I didn't find a thing, and I've been trying for hours now... :\ [131087920130] |P.S.: If I do this on the router: [131087920140] |and start tcpdump, and try to connect from my PC, nothing happens! So is the bug in the NetworkManager applet? [131087920150] |Again, P.S.: if I do this: [131087920160] |Are there any good howtos about setting up this kind of OpenVPN [as in the pastebin link, on an OpenWrt router]? [131087920170] |Is it worth finding another OpenVPN client program? [other than the NetworkManager applet] [131087930010] |Where does the address you typed in the gateway field point? [131087930020] |I've never configured OpenVPN using NetworkManager, but I suppose that is the place where you should provide the address of your router. [131087930030] |And in your log file there is a line which says: [131087930040] |And after that: [131087930050] |It looks like an SELinux-related problem. [131087930060] |SELinux denies access to your certificate file. [131087930070] |I would search here or here for how to set rules for SELinux. [131087930080] |For sure you should set read permission on the certificate files to make openvpn able to read them. [131087930090] |I don't use an SELinux-based system, so I can't check what rules you should use. 
[131087930100] |About telnet: in your config you set port 1194 for OpenVPN, but you are trying to connect to port 1197 using telnet. [131087940010] |How to build all of Debian [131087940020] |Let's assume: [131087940030] |
  • I have a mirror of the source repository locally.
  • [131087940040] |I only want to build for the architecture I'm running, which is i386 in my case.
  • [131087940050] |I'm not interested in customisation. [131087940060] |In fact, I want the resulting binary packages to be as close to official ones as possible. [131087940070] |From there on, I'll take care of how they are organised. [131087940080] |I think I'll use reprepro.
  • [131087940090] |I want it to be as easy as possible.
[131087950010] |Maybe something like this? [131087950020] |I know you said that you have the packages already, but I'm not sure what format they are in. [131087950030] |If you already have all the packages properly unpacked in your cwd then apt will skip that package and move on to the next one. [131087950040] |You can also pass -s to apt-get to test the commands. [131087950050] |Almost all packages should build in one shot like this, if you're using the same release that you're trying to build. [131087950060] |You will need lots of storage, and lots of time. [131087960010] |Debian already does this internally, to build the binary packages that they offer for download, and most of the tools are in the distribution. [131087960020] |(Everything except some glue scripts, I think.) [131087960030] |Tools that may interest you include: [131087960040] |
  • debian-builder: rebuild packages from source code
  • [131087960050] |buildd: automatically build some packages
  • [131087960060] |pbuilder: personal package builder (in a chroot, mainly intended for Debian developers)
  • [131087960070] |More generally, look at packages tagged devel::debian or works-with::software:source (aptitude search '?tag(devel::debian) | ?tag(works-with::software:source)').
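To make the "maybe something like this" answer above concrete, here is one hedged dry-run sketch. build_all is a hypothetical helper, not a standard tool; it only prints the commands, and an actual run additionally needs deb-src lines in sources.list and root privileges:

```shell
#!/bin/sh
# Hypothetical helper: print the two apt-get steps for each package named.
# Replace "echo" with direct execution (as root) to actually build.
build_all() {
  for pkg in "$@"; do
    echo "apt-get -y build-dep $pkg"  # install the package's build dependencies
    echo "apt-get -b source $pkg"     # fetch the source and build binary .debs
  done
}

build_all coreutils bash
```

Pass -s to the real apt-get invocations first to simulate, as the answer suggests.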
[131087970010] |Is it possible to source .bashrc and .zshrc again AND remove functions that were once inside them, without restarting? [131087970020] |I had a function inside .zshrc that I removed. [131087970030] |Now, when I try to source it, it indeed sources it, but doesn't remove the function that was once inside .zshrc from memory. [131087970040] |Is there a way to remove the function (now, I believe, in memory, in the zsh namespace or something like that) without restarting my machine? [131087980010] |In zsh, you can remove a function with unhash -f functionname or unfunction functionname. [131087980020] |That doesn't automatically clear functions you've removed from a given startup file, though, because of course the shell doesn't remember where it got them from in the first place, nor does it attribute any special meaning to re-sourcing the same file. [131087980030] |So you'll have to know what you want to forget. [131087980040] |Since you mention .bashrc in the subject: the bash equivalent is unset -f functionname [131087990010] |Resources to learn Linux? [131087990020] |Possible Duplicate: Good Introductory resources for linux [131087990030] |I have no experience with any Linux operating system and I just put Ubuntu on my MacBook Pro. [131087990040] |What would you professionals recommend a beginner do to learn how to use this OS to its full potential? [131087990050] |I've come across this - http://www.linuxlots.com/~jam/ [131087990060] |Ideally when I learn to program I want this to be my development machine. [131087990070] |Thanks. [131088000010] |How do I install mplayer from a terminal? [131088000020] |I need a command to run on Linux to install mplayer. [131088000030] |I'm not finding the command anywhere, and when I try to use the zypper command to install mplayer it tells me I am not a sudoer. [131088000040] |What should I do? [131088010010] |Installed Fedora in dual boot Windows desktop. Now I can't get full monitor resolution. 
[131088010020] |I have a 1920x1080 resolution monitor, and after installing Fedora to create a dual-boot machine, the maximum resolution Fedora 14 (previously only Windows 7 was installed) can achieve is 1280x1024. [131088010030] |Why is this the case? [131088010040] |How do I figure out what to do to get full native resolution on my monitor in Fedora? [131088020010] |Most likely you don't have the driver installed. [131088020020] |You have two choices: [131088020030] |
  • Use a proprietary driver. [131088020040] |You can get proprietary drivers through the manufacturer's website; in this case it's AMD's download page. [131088020050] |Some distros also have proprietary drivers in the repositories. [131088020060] |For Fedora, check out this Unofficial Fedora FAQ.
  • [131088020070] |Use an open-source driver. [131088020080] |This is often not as good as the proprietary one, unless the manufacturer has provided the specifications to open-source developers, in which case it is better. [131088020090] |Look into your distro's documentation on how to get open-source drivers. [131088020100] |I don't know if this exists for Fedora, though.
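Whichever driver route you take, xrandr is a quick way to see what modes the currently loaded driver offers (to run in a live X session; the output name VGA1 below is an example, yours may be LVDS1, DVI-0, etc.):

```shell
xrandr                                 # list outputs and the modes X detected
xrandr --output VGA1 --mode 1920x1080  # try forcing the native mode
```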
[131088030010] |So, this appears to be a really new graphics card. [131088030020] |You'll need both an up-to-date X driver and a really recent kernel — in fact, you need the not-yet-released (as of early March 2011) 2.6.38 kernel. [131088030030] |(See this article for more on the upcoming kernel release.) [131088030040] |The good news is that the pre-release 2.6.38 kernel is already in the tree for Fedora 15, and the Fedora 15 Alpha release is scheduled for today (March 8th, 2011). [131088030050] |Get the release from http://torrent.fedoraproject.org/. [131088030060] |I can't promise that that'll make the card work, but the signs look positive. [131088030070] |I'm not sure if the needed driver code is in the F15 X.org drivers yet, but the quickest way to find out is to try it. [131088030080] |You can even get the Live Desktop CD option, which will let you test if it works without even reinstalling. [131088030090] |It's possible (likely even) that the required bits will make it into Fedora 14 in a few months. [131088030100] |So just waiting is another option. [131088030110] |(Honestly, I think either of those will be a better option than the proprietary binary driver. [131088030120] |I've had no end of trouble from that. [131088030130] |It's faster at 3D, so if top 3D performance is your main need, it might be worth it, but for general use, eh.) [131088040010] |Where is the trash directory for PCManFM and xfe? [131088040020] |I've got PCManFM and Xfe as graphical file managers in my Arch Linux with Openbox. [131088040030] |When I click on the Trash link in PCManFM I get an error saying "Operation not supported". [131088040040] |Question: Where do PCManFM and xfe put files you send to the Trash? [131088040050] |Thanks. [131088050010] |You need to install gvfs to get PCManFM's Trash Can to work. 
[131088050020] |It stores the files in the FreeDesktop standard location: ~/.local/share/Trash/files [131088060010] |How can I find the application for a MIME type on Linux? [131088060020] |Is there a Linux API that can find the default application for a MIME type? [131088060030] |Then I can use this application to open a file. [131088060040] |I cannot use xdg-open(url) because the file format is a wrapper format, and shared-mime-info can only tell the wrapper MIME type. [131088060050] |The embedded MIME type can only be obtained from the wrapper file header. [131088060060] |The process would be like: 1. find the embedded MIME type 2. mime-open(embedded mime type, url) [131088060070] |Is it possible? [131088070010] |The xdg-mime command is used to query or set file associations. [131088080010] |Can shared-mime-info associate a MIME type with a desktop application? [131088080020] |I can add a new MIME type in shared-mime-info, but how can I associate this MIME type with an application? [131088090010] |Use the xdg-mime command. [131088090020] |xdg-mime default application mimetype [131088090030] |Ask the desktop environment to make application the default application for opening files of type mimetype. [131088090040] |An application can be made the default for several file types by specifying multiple mimetypes. [131088090050] |The above is taken from man xdg-mime, slightly modified to copy the usage down from the SYNOPSIS. [131088100010] |If you just want to associate them directly, and not make them default, you can add them to [131088100020] |(system-wide), or [131088100030] |(per-user). [131088100040] |Edit: using xdg-mime, as geekosaur suggests, might perhaps be more robust. [131088100050] |In this case, you would want [131088100060] |xdg-mime install [--mode mode] [--novendor] mimetypes-file [131088100070] |Adds the file type descriptions provided in mimetypes-file to the desktop environment. 
mimetypes-file must be an XML file that follows the freedesktop.org Shared MIME-info Database specification and has a mime-info element as its document root. [131088100080] |For each new file type, one or more icons with the name major-minor must be installed with the xdg-icon-resource command in the mimetypes context. [131088100090] |For example, the application/vnd.oasis.opendocument.text filetype requires an icon by the name of application-vnd.oasis.opendocument.text to be installed. [131088110010] |GTK application (Gvim) rendering troubleshooting [131088110020] |When I switch from gvim to another application and then after some time switch back, the gvim window appears blank with the cursor blinking in the middle. [131088110030] |Sometimes the toolbar and tabs look like white space. [131088110040] |When I open a new tab, the tab bar doesn't refresh itself and shows the same tabs as before opening the new tab. [131088110050] |When I resize the window, everything goes back to normal. [131088110060] |How do I fix this gvim rendering problem in Gentoo Linux (or figure out what causes it)? [131088120010] |gcc and g++ linker [131088120020] |I am not yet at the point of linking, but as soon as my project compiles I will face this issue: [131088120030] |
  • I have one c.o object file compiled by gcc from pure C code
  • [131088120040] |I have one d.o object file compiled by g++ with extern "C" for C compatibility of functions that need to be callable by the c.o
  • [131088120050] |I have a lot of *.o object files compiled by g++ from pure C++ code that are called by the d.o part
[131088120060] |How should I link the whole thing as one shared library? [131088120070] |Using gcc or g++? [131088120080] |This library will then be used by Apache2 as a C module. [131088130010] |Can I limit a user (and their apps) to one network interface? [131088130020] |As the title really. [131088130030] |I've actually got two scenarios to apply this: [131088130040] |
  • Multiseat Desktop: two network connections, both with internet gateways, and two accounts doing bandwidth-intensive tasks on each. [131088130050] |I want to split them up so one account only uses eth0 and the second account only uses eth1.
  • [131088130060] |Server: I have two IPs on a server and I want to make sure the mail user only sends email from the second IP (eth0:1 alias)
[131088130070] |The second can probably be IPTabled (I just don't know how) to route email traffic through that interface, but the first will be dealing with all sorts of traffic, so it needs to be user-based. [131088130080] |If there is a user-based solution, I could apply it in both places. [131088140010] |I'm not sure that's possible for the first point. [131088140020] |You want to do some routing manipulation based on the userid of the user. [131088140030] |Last time I checked I didn't see this possibility. [131088140040] |For the second point, it's not iptables that you want to use but iproute2 (http://lartc.org/howto/ and http://www.policyrouting.org/iproute2.doc.html for the complete doc). [131088140050] |It's the replacement for the ifconfig/route commands, as they are considered obsolete. iproute2 allows you to route packets according to their source. [131088140060] |That's what you want. [131088150010] |You'll want to use the iptables owner module and perhaps some clever packet mangling. [131088150020] |owner: This module attempts to match various characteristics of the packet creator, for locally-generated packets. [131088150030] |It is only valid in the OUTPUT chain, and even then some packets (such as ICMP ping responses) may have no owner, and hence never match. [131088150040] |--uid-owner userid: Matches if the packet was created by a process with the given effective (numerical) user id. [131088150050] |--gid-owner groupid: Matches if the packet was created by a process with the given effective (numerical) group id. [131088150060] |--pid-owner processid: Matches if the packet was created by a process with the given process id. [131088150070] |--sid-owner sessionid: Matches if the packet was created by a process in the given session group. [131088160010] |You could set up two virtual machines on the physical machine, and set up the network interface bridging so that one VM uses eth0 and the other VM uses eth1. 
[131088160020] |See the VirtualBox documentation section on bridged networking. [131088170010] |Git Server Bash and SSH [131088170020] |I want to use Git over SSH with my Linode VPS running Ubuntu 10.04 LTS. [131088170030] |The instructions seem pretty easy, in fact, given I had already set up my SSH key etc. [131088170040] |All I had to do was instruct my local repository to send to the server [131088170050] |ssh://matt@myvps.com:22/~/mygits/proj1.git [131088170060] |The problem is, when I do git push origin master it just stalls: no network activity, no errors; after a few minutes I kill it with Ctrl+C. [131088170070] |I spent a long time yesterday trying to diagnose the problem. [131088170080] |So on the server I set up a new user matt2, copied authorized_keys across to matt2, tried pushing to matt2@myvps.com:22, and voila, it worked. [131088170090] |What are the differences between matt and matt2? [131088170100] |Well, matt has this in his .bash_profile to ensure an ssh-agent is running (I need this functionality a lot): [131088170110] |So it appears my .bash_profile conflicts with the operation of git over SSH. [131088170120] |Any suggestions for workarounds? I don't want to use two user accounts, and I want to keep my .bash_profile. [131088170130] |It would be good if I could edit the .bash_profile and wrap the functionality in if [ $connectingWith != "git-client" ], but I doubt such a thing exists? [131088170140] |Thoughts? [131088180010] |Please ignore. [131088180020] |There was something in .bashrc which was causing problems. [131088180030] |I didn't see this as it was not executed when SSHing in. [131088190010] |Fedora won't boot. The startup progress bar goes all the way to the right and then everything just freezes. [131088190020] |What could the problem be? [131088190030] |How do I find out what's going on? [131088190040] |EDIT: Hitting F2 when the progress bar is going across the screen shows that booting stops at "Starting atd: [OK]". 
[131088190050] |After it reaches that point, the screen flickers a little bit and it just hangs. [131088200010] |Sounds very much like there is a problem starting X, especially if you were messing around with xorg.conf. [131088200020] |Deleting xorg.conf should solve the problem; you should also check /var/log/Xorg.0.log for errors. [131088210010] |Ensure a process is always running [131088210020] |I started hosting sites a while back using Cherokee. [131088210030] |For external sources (FastCGI, etc.) it has an option to launch the process if it can't find one running on the designated socket or port. [131088210040] |This is great because it means if PHP or a Django site falls over (as they occasionally do) it restarts automatically. [131088210050] |On a new server using PHP-FPM I couldn't use Cherokee (it has a bug with PHP) so I've moved to NGINX. [131088210060] |I really like NGINX (for its config style) but I'm having serious issues with processes falling over and never respawning. [131088210070] |PHP does this sometimes, but Django sites are more of a problem. [131088210080] |I've created init scripts for them and they come up on boot, but this doesn't help me if they conk out between reboots. [131088210090] |I guess I'm looking for a FastCGI proxy. [131088210100] |Something that, like Cherokee, knows what processes should be running on which sockets/ports and respawns them on demand. [131088210110] |Does such a thing exist? [131088210120] |Is there any way to build this into NGINX (for ease of config)? [131088220010] |How about http://cr.yp.to/daemontools.html? [131088230010] |I second the daemontools suggestion, but if you don't like the way DJB's software works (for whatever reason), there's also supervisord. [131088230020] |I set up a FreeBSD image a while back that used supervisord to manage nginx and gunicorn, which I used to host some simple WSGI apps, and the whole process was pretty straightforward. 
[131088230030] |If you're doing this for Django, Gunicorn makes it really straightforward to deploy Django apps, btw. [131088230040] |See this blog post for more details. [131088240010] |respawn in inittab [131088250010] |Have you considered god? [131088250020] |God is an easy to configure, easy to extend monitoring framework written in Ruby. [131088250030] |Keeping your server processes and tasks running should be a simple part of your deployment process. [131088250040] |God aims to be the simplest, most powerful monitoring application available. [131088250050] |I use it to make sure that if Rails/nginx instances fall over, they get revived. I don't see built-in support for checking whether it's using the right port, but if the problem is that the process fails or is no longer running, you can't go wrong with god. [131088260010] |Very simply, you could just use cron to check for the process and restart it. [131088270010] |In addition to daemontools and supervisord, there's daemonize. [131088280010] |Another option could be to use monit, which is the one I generally use. [131088290010] |Simple answer - start, write your pid somewhere, and every x time (seconds, minutes, your choice) check if the process is up. [131088290020] |Long answer - all of the above are good methods, but somewhat complicated. [131088290030] |Also keep in mind that being alive and answering requests are different things. [131088300010] |Should the usage message go to stderr or stdout? [131088300020] |Should the usage message which is printed with e.g. [131088300030] |of a Unix command go to stderr or stdout, and why? [131088300040] |Should it go to the same place if the user makes a mistake with an option? [131088310010] |It should go to stdout, so you can type: [131088310020] |This is also recommended by the GNU Coding Standards on --help. 
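The convention can be demonstrated with a tiny shell function (frob is a hypothetical command name used purely for illustration):

```shell
# frob: hypothetical command showing where usage text should go.
frob() {
  usage='usage: frob [--help] FILE'
  case "$1" in
    --help) echo "$usage" ;;            # requested help: normal output, stdout
    -*) echo "$usage" >&2; return 64 ;; # bad option: error message, stderr
    *) echo "processing $1" ;;
  esac
}

frob --help                       # printed on stdout, so it can pipe into less/grep
frob --bogus 2>/dev/null || true  # usage error goes to stderr; stdout stays clean
```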
[131088310030] |On the other hand, the usage message that you get when you use an invalid option or omit a required argument should go to stderr, because it's an error message, and you don't want it to feed into the next command in a pipeline. [131088310040] |When you use --help, the usage message is the normal and expected output of the command. [131088310050] |Therefore, it goes to stdout, so it can be piped to another command, like less or grep. [131088310060] |When you say command --bogus-option | other-command, you don't want the usage message going to stdout, because it's now unexpected output that should not be processed by other-command. [131088310070] |Also, if the output of --help is more than a handful of lines, then the usage error message should only contain a summary of the --help output, and refer the user to --help for additional details. [131088320010] |Consumer-level software RAID5 and LVM [131088320020] |Hi, I am going to build a 6-drive consumer-level RAID 5 array (10TB) with Ubuntu 10.10, using ext4 as the filesystem, with the OS on another drive. [131088320030] |Question: Should you use LVM on top of the RAID5, or should you just use ext4 directly on top? [131088330010] |It would be a good idea to use LVM on top of RAID. [131088330020] |Then you can grow the RAID array and also grow the LV. [131088340010] |Unless you really need the whole 10 TB, I'd rather recommend building 3 RAID 1 arrays and adding those to a LV. [131088340020] |The reason: the next time you are going to increase your storage capacity, you don't need to replace all 6 disks. [131088340030] |Remember that RAID 5 needs identical storage capacity on all drives. [131088340040] |You can instead replace just one of your RAID 1 arrays. [131088350010] |If you only need one filesystem on your RAID then there are no real advantages to using LVM. [131088350020] |On the contrary, without LVM on top you get the following advantages: [131088350030] |
  • reduced overall complexity
  • [131088350040] |better performance
[131088350050] |Btw, you can resize ext4 filesystems without LVM as well (resize2fs(8)). [131088350060] |Regarding the performance impact of LVM, some people report decreases of 5%, others a 20-fold degradation when snapshotting is involved; i.e. it depends on the LVM features/layouts you use and your usage pattern. [131088360010] |LVM on top of anything is probably a good idea because it gives you quite a bit of flexibility at pretty marginal cost (the extra abstraction layer is really cheap compared to disk I/O). [131088360020] |That said, I'd use RAID6, as RAID5 leaves you with no redundancy during a rebuild, which is precisely the time of high stress when drives are most likely to fail. [131088370010] |LVM and RAID have some similar functionality (both can do mirroring and striping) but they serve different purposes. [131088370020] |RAID is designed to make the storage more reliable, faster and bigger. [131088370030] |The different RAID levels each achieve one or more of these 3 goals. [131088370040] |For example, RAID0 gives you speed and more space while RAID1 provides reliability and fast reads. [131088370050] |RAID5 gives you some reliability at the cost of some write speed; RAID6 does this even more. [131088370060] |With 10TB I would consider creating partitions on the disks and adding the partitions to different RAID arrays. [131088370070] |For example, you can have swap on RAID0, system files on RAID5, the boot partition on RAID1 (so grub can use it), and /home on RAID1+0. [131088370080] |LVM is designed to hide what storage you use. [131088370090] |It doesn't matter how many disks or where; all you see is a logical volume. [131088370100] |You can easily add/remove physical volumes without the filesystems on the logical volume knowing about it. [131088370110] |Most importantly, it gives you snapshots. [131088370120] |Snapshots save lives. [131088370130] |Make one before every upgrade or make a daily snapshot of /home. 
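As a sketch of that pre-upgrade workflow (the volume group and LV names are made-up examples; this requires root, free extents in the VG, and a reasonably recent LVM2 for --merge):

```shell
# Take a snapshot of /home's LV before an upgrade:
lvcreate --size 5G --snapshot --name home_preupgrade /dev/vg0/home
# If the upgrade goes wrong, merge the snapshot back, reverting the LV:
lvconvert --merge /dev/vg0/home_preupgrade
```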
[131088370140] |Having too many snapshots can greatly reduce write performance on the original LV. [131088370150] |Snapshots are implemented with copy-on-write, which causes an extra read and write operation per snapshot. [131088370160] |Even for very small writes a complete block is copied. [131088370170] |See the links in maxschlepzig's answer. [131088370180] |Another advantage is not having to know in advance how large the filesystems will be. [131088370190] |You can create small LVs and grow them as needed. [131088370200] |Use the extra space for snapshots; don't just create a 9.9TB /home immediately. [131088370210] |So yes, it makes sense to use both. [131088380010] |.Xresources settings in effect [131088380020] |Is there some way to inspect which .Xresources settings are in effect at the moment (unlike xrdb -query)? [131088380030] |For example, I'm on a host which doesn't seem to respect *reverseVideo: true, but I don't know whether that is because I wrote it the wrong way (even *florb: glorb doesn't raise an error when running xrdb -merge $HOME/.Xresources), because the setting is not supported, or some other reason. [131088390010] |Doesn't xrdb -query -all do what you want? [131088390020] |I have some fairly unorthodox settings loaded at X-startup from my .Xresources, and it gives them back to me: [131088400010] |There is a difference between resources being loaded into an X11 server and being loaded by a client. [131088400020] |For instance, you could change the server's resources after launching a client. [131088400030] |To get the current server resources, you can use 'xrdb -query -all'. [131088400040] |For getting the current client resources, I'm not aware of a solution, but editres(1) will allow you to send resources to a compliant client while it is running. [131088400050] |You'll probably have luck with applications that use the Xaw and Motif-era toolkits, but less (or no) luck with GTK and QT applications.
[131088400060] |A good example is 'xterm': you can turn the scrollbar on and off via editres without restarting the client. [131088410010] |xrdb -query lists the resources that are explicitly loaded on the X server. [131088410020] |appres lists the resources that an application would receive. [131088410030] |This includes system defaults (typically found in directories like /usr/X11R6/lib/X11/app-defaults or /etc/X11/app-defaults) as well as the resources explicitly set on the server with xrdb. [131088410040] |You can restrict the query to a particular class and instance, e.g. appres XTerm foo to see what resources apply to an xterm invoked with xterm -name foo. [131088410050] |The X server only stores a list of settings. [131088410060] |It cannot know whether a widget will actually make use of these settings. [131088410070] |Invalid resource names go unnoticed because you are supposed to be able to set resources at a high level in the hierarchy, and they will only apply to the components for which they are relevant and not overridden. [131088410080] |X resource specs obey fairly intricate precedence rules. [131088410090] |If one of your settings doesn't seem to apply, the culprit is sometimes a system default that takes precedence because it's more specific. [131088410100] |Look at the output of appres Class to see if there's a system setting for something.reverseVideo. [131088410110] |If your application is one of the few that support the Editres protocol, you can inspect its resource tree with the editres program. [131088420010] |Video playlists with start and end times [131088420020] |Is there a good GUI application (for example an mplayer GUI or something like Banshee) for Linux which allows you to make and edit playlists (for video files) with different starting and stopping times for each video on the list?
[131088420030] |Added: [131088420040] |At the moment I manually make files which contain something like this: [131088420050] |video.avi -ss 2440 -endpos 210 #some comment [131088420060] |video2.mp4 -ss 112 -endpos 2112 [131088420070] |Then I have a wrapper script for: mplayer -fs $(grep -v "^ #" $1) [131088420080] |Furthermore I have written some Emacs functions which simplify the editing of such files a little bit. [131088420090] |(Like converting start and end times from hh:mm:ss format to seconds, and the end time to a relative position (endtime - starttime) as required by -endpos. I can post the macros if someone is interested.) [131088420100] |However, that's still too uncomfortable. [131088420110] |So my question is whether there is a nice GUI for doing this (for example one which allows you to mark the start and end times for the playlist in a video timeline, and so on). [131088430010] |Maybe I'm getting the question wrong, since English is not my first language, but wouldn't it be better if you edited the video with a tool like Kino instead of making a playlist like that? [131088430020] |You can adjust the starting and stopping times as you want, and I don't think it would be that difficult. [131088430030] |Sorry for my English! [131088430040] |Cheers. [131088440010] |I failed to find out whether these can really be applied to playlists, but you may look into Edit Decision Lists (EDLs). [131088440020] |Here are some links to get you started: [131088440030] |
  • MPlayer manual about EDL support
  • [131088440040] |
  • MPlayer EDL tutorial
  • [131088440050] |
  • Video editing from the command line LinuxGazette article
  • [131088440060] |
  • The sensible cinema project
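For reference, an MPlayer EDL file is just lines of "start-second end-second action", where action 0 means skip the interval and 1 means mute it. A sketch (the file name clip.edl is an assumption; the times come from the question's first example, playing only seconds 2440–2650 by skipping everything around them):

```shell
# EDL format: <start seconds> <end seconds> <action>; action 0 = skip
cat > clip.edl <<'EOF'
0 2440 0
2650 99999 0
EOF

# Apply the cut list while playing (not run here):
# mplayer -edl clip.edl video.avi
```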
  • [131088440070] |If you don't mind the small pauses between the videos you could just run mplayer several times from a script with a different EDL file each time. [131088440080] |If pauses are a no-no then maybe you should create a new video just like varrtto suggested. [131088450010] |The following will drive SMPlayer, which uses mplayer internally. It is, at least, a normal GUI, but your playlist would need to be in your text editor... and you obviously know about that method already :) [131088450020] |I tried this a couple of years ago, but I'd forgotten all about it as I don't often need such a thing, but it is good to keep "bookmarks".. [131088450030] |I'm glad you've resurrected the idea.. [131088450040] |Here is the script... which really only does the same as you have been doing, but to SMPlayer (an mplayer GUI) [131088460010] |Hibernate to disk not restoring, but suspend to RAM is working. [131088460020] |I have Debian 6; I have also seen this under Ubuntu (I can not remember how I fixed it). [131088460030] |I can hibernate, but when I switch it on the system cold boots (it does not restore the previous session). [131088460040] |Note suspend works fine. [131088460050] |I have looked in /var/log/pm-suspend.log. It shows, for each suspend, a `suspend suspend` block and a `resume suspend` block, but `hibernate hibernate` is not followed by `resume hibernate` (I assume that is what is expected). [131088460060] |I installed the package hibernate, as I was thinking it may be needed, but it made no difference. [131088460070] |I just started looking and can't find /usr/lib/hal/scripts/linux/hal-system-power-hibernate-linux or /usr/lib/hal. I searched for power files (is it all there?):
[131088470020] |Install uswsusp. [131088470030] |Then try the following for suspend and hibernate respectively, [131088470040] |if it works, then you can make it permanent; back up the following, [131088470050] |and edit the following, [131088480010] |DOS-based printing from NT to Unix/Linux.
  • A dot-matrix printer is physically connected to the Linux machine (e.g. Ubuntu 10.04; it can be any Unix/Linux flavour)
  • [131088480040] |
  • From this Linux machine, when I take an RDP session to the Windows NT 4.0 TS, I run the DOS-based application.
  • [131088480050] |
  • Now I want to print a few pages from this DOS-based application to the Ubuntu machine's printer while I am in the RDP session.
  • [131088480060] |Following the Samba printing documentation, I am able to print from GUI-based apps like Notepad, MS Word and so forth, but not able to print from the Windows command prompt. [131088480070] |Any idea how to do this? [131088480080] |[ WINDOWS MACHINE IS STRICTLY NT-4.0 2000 TS ] [131088490010] |How to catch option and non-option arguments correctly? [131088490020] |I want to write a shell script which will take some arguments with some options and print those arguments. [131088490030] |Suppose the name of that script is abc.ksh. [131088490040] |Usage of that script is: ./abc.ksh -[a|b|c|d]. Now I have written a shell script which takes options and arguments. [131088490050] |Examples of Correct Inputs:
  • ./abc.ksh -a "sample text" "some_string"
  • [131088490070] |
  • ./abc.ksh "some_string" -a "sample text"
  • [131088490080] |
  • ./abc.ksh -asample\ text some_string
  • [131088490090] |
  • etc...
  • [131088490100] |The some_string input is not caught by my script. [131088490110] |How can I catch it? [131088500010] |After capturing your options [131088510010] |It is typical for programs to force the "some_string" part to be the last argument, so that ./abc.ksh "some_string" -a "sample text" is an error. [131088510020] |If you do this, then after parsing the options, $OPTIND holds the index of the last argument (the "some_string" part). [131088510030] |If that is not acceptable, then you can check at the beginning (before you enter the while loop) to see if there is a non-prefixed argument. [131088510040] |This will let you have "some_string" at the beginning and at the end. [131088510050] |If you needed to have it in the middle, you could either not use getopts or you could have two sets of getopts. [131088510060] |When the first one errors out, it could be due to the non-prefixed argument; get it and start a new getopts to get the remaining args. [131088510070] |Or you can skip getopts altogether and roll your own solution. [131088520010] |Last time a file was opened [131088520020] |Is it possible to get the time when a file was last opened, and sort all files in a directory by those times? [131088530010] |ls -ltu lists all the files, showing and sorting by access time. [131088530020] |From man ls:
  • mtime — updated when the file contents change. [131088540040] |This is the "default" file time in most cases.
  • [131088540050] |
  • ctime — updated when the file or its metadata (owner, permissions) change
  • [131088540060] |
  • atime — updated when the file is read
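All three timestamps can be inspected directly with stat; a quick sketch (GNU coreutils stat assumed; demo.txt is a throwaway file used only for illustration):

```shell
# Create a file and print its three timestamps in human-readable form
touch demo.txt
stat --printf='atime: %x\nmtime: %y\nctime: %z\n' demo.txt

# Raw epoch seconds, handy for scripting: %X (atime), %Y (mtime), %Z (ctime)
stat --printf='%X %Y %Z\n' demo.txt
```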
  • [131088540070] |So, generally, what you want to see is the atime of a file. [131088540080] |You can get that with stat or with ls. [131088540090] |You can use ls -lu to do this, although I prefer to use ls -l --time=atime (which should be supported in almost all modern Linux distributions) because I don't use it often, and when I do I can remember it better. [131088540100] |And to sort by time, add the -t flag to ls. [131088540110] |So there you go. [131088540120] |There is a big caveat, though. [131088540130] |Updating the atime every time a file is read causes a lot of usually-unnecessary IO, slowing everything down. [131088540140] |So, some Linux distributions now default to the noatime filesystem mount option, which basically kills atimes, or else relatime, which only updates atimes if they are older than the current mtime. [131088540150] |(So you can tell if the file has been read at least once since it was modified.) [131088540160] |You can find if these options are active by running the mount command. [131088540170] |Also, note that access times are by inode, not by filename, so if you have hardlinks, reading from one will update all names that refer to the same file. [131088550010] |If your listing is for human consumption, use ls with one of the date sorting flags (-tu for access (read) time, just -t for modification (write) time or -tc for inode change time). [131088550020] |See mattdm's answer for more information (in particular the caveat regarding -a and the definition of -c). [131088550030] |If this is for program consumption, parsing the output of ls is problematic. [131088550040] |If your shell is zsh, you don't need ls anyway: zsh has globbing qualifiers to sort the matches by increasing access (*(Oa)), inode change (*(Oc)) or modification (*(Om)) time. [131088550050] |A lowercase o sorts by increasing age. 
[131088550060] |Otherwise, if you know that the file names don't contain any newline or non-printable characters (in the current locale), then you can do something like [131088550070] |If you want to invoke a command on many files at once, you need more setup. [131088550080] |Note that act_on_files_by_date $(ls -t) doesn't work just like this, as filenames containing wildcard characters or whitespace would be expanded in the result of the command substitution. [131088550090] |The following code works as long as no filename contains a newline or a non-printable character: [131088550100] |If you want to cope with arbitrary file names, you'll have a very hard time without resorting to more powerful tools than a standard shell: zsh, perl, python… [131088560010] |Ideas for securing OpenVPN on an OpenWrt router [131088560020] |I put an OpenVPN server on port 1194 on an OpenWrt 10.03 router: [131088560030] |What could I do (on the server side) to increase security regarding this OpenVPN server? [131088560040] |Here are some of my ideas: [131088560050] |
  • I sed 's/1194/50000/' the port number to a higher one to make it harder for port scanners to find
  • [131088560060] |
  • iptables? [131088560070] |I should only allow IP ranges [on the input chain] that I will use in reality? +Only allow my laptop's MAC address
  • [131088560080] |
  • If I don't use my router (e.g.: when I'm sleeping) I just turn it off.
  • [131088560090] |Is there anything I'm missing? [131088560100] |From the ps command I can see that OpenVPN is run by root, which is unsafe. [131088560110] |What else should I do to increase security? [131088570010] |OpenVPN is designed to be secure. [131088570020] |It will only allow clients whose keys were signed by you. [131088570030] |The most important thing is keeping the private keys secure. [131088570040] |Always encrypt them on the clients and check the permissions on the key file on the server. [131088570050] |Don't keep the CA private keys on the server; it doesn't need them. [131088570060] |Encrypt them, put them on a pendrive and protect it. [131088570070] |Port scanners will have no trouble finding the server on any port, but they won't be able to use it. [131088570080] |If you know that you will only use it from a limited set of IP addresses then by all means disable everything else with iptables. [131088570090] |However, most people tend to use it from varying locations, for example with a laptop. [131088570100] |You could automatically ban IPs that try invalid keys, but brute-forcing RSA keys like this is infeasible anyway. [131088570110] |If the keys are safe, then the biggest risk is some bug in the OpenVPN implementation which makes it vulnerable to attacks. [131088570120] |If that happens, an attacker can run arbitrary code with the privileges of the OpenVPN server process. [131088570130] |You can decrease the effect of this kind of attack by not running the server as root. [131088570140] |Add this to your server config: [131088570150] |Your config file seems to use a different syntax than mine, but something like this should be supported. [131088570160] |You can try the grsecurity patch for the kernel, but I'm not sure it works on embedded systems, and it would be really bad if you made it unbootable by accident. [131088570170] |It makes arbitrary code execution bugs harder to exploit. [131088570180] |You can also increase key sizes.
[131088570190] |1024-bit keys may become breakable in the near future, if they aren't already. [131088570200] |Be sure not to generate them with Debian's OpenSSL. :) [131088570210] |It's my personal opinion that MAC address filtering is absolutely useless. [131088570220] |It's easy to fake, and valid ones can be found easily. [131088570230] |Use WPA2 CCMP with a 63-byte-long random key and you should be OK. [131088570240] |Don't let people plug in random cables. [131088570250] |I know there are not many resources available on routers, but you can try logging. [131088570260] |I'm almost sure there won't be enough space on the router, so log to another host. [131088570270] |Syslog-ng can do this easily; I don't know how easy it is to install on a router. [131088580010] |Obtaining Solaris 10 patches [131088580020] |I have a Solaris 10 server. [131088580030] |I've been trying to find out where I can download patches/updates. [131088580040] |The specific bug I'm running into (http://bugs.opensolaris.org/view_bug.do?bug_id=6551484) is reportedly fixed, according to that report: [131088580050] |So far, all the potential Solaris patch download sites that Google returns redirect to the My Oracle Support portal. [131088580060] |I signed up for that, but now I need an "Oracle Support Identifier". [131088580070] |According to the FAQ: "After you log in with your SSO account, you must register your Oracle Customer Support Identifiers (Support IDs) or your Sun Contract Number. [131088580080] |Most customers have only one Support ID. [131088580090] |Your Support ID is included in the welcome letter sent from Oracle. [131088580100] |This is a number similar to 3434354 that defines for Oracle the products you have licensed for support. [131088580110] |After you have registered your Support IDs, you must be approved by the Customer User Administrator (CUA) for your organization." [131088580120] |There is no paperwork; I have a SPARC machine with Solaris 10 installed on it.
[131088580130] |Am I out of luck regarding getting it updated? [131088590010] |I was actually just now on the phone with Oracle about an unrelated matter, and they've confirmed to me that it's now not possible to download any patches for Solaris outside of the Oracle support system (http://support.oracle.com/). [131088590020] |Sorry to be the bearer of bad news. [131088600010] |Python in Gnome code base [131088600020] |Is Python present in the Gnome code base? [131088600030] |If so, how is Python involved? [131088600040] |Note [131088600050] |My question concerns the core Gnome Desktop Environment only, e.g. Nautilus, gnome-session, NetworkManager, etc., and any underlying app or library. [131088600060] |Thanks.
  • GNOME Core dependencies (e.g. GLib, GTK+, D-Bus); in a sense, this can be called the GNOME platform.
  • [131088610030] |
  • GNOME Core (e.g. Nautilus, gnome-power-manager, gnome-session)
  • [131088610040] |
  • GNOME Featured Apps (e.g. Anjuta, Evolution, gedit)
  • [131088610050] |Here's modules that depend on Python, directly or not: [131088610060] |
  • EOG, the image viewer, is part of GNOME Core. [131088610070] |For GNOME 2, there are gnome-panel and gnome-applets, both of which are to be replaced by 'shinier' GNOME 3 tech.
  • [131088610080] |
  • dbus-python, which provides a Python interface to D-Bus, is part of GNOME Platform.
  • [131088610090] |
  • There are a number of modules in GNOME Featured Apps which depend not only on Python but also on PyGTK.
  • [131088610100] |I'm likely missing some more. [131088610110] |But the point is that GNOME sees heavy usage of Python. [131088610120] |I think it's the second most used language after C for the above modulesets. [131088610130] |Either Vala or C# is third, I'm not sure. [131088610140] |Maybe I should fire up sloccount on my checkouts :) [131088620010] |Is there a search utility which can binary-split search a large sorted (sequential) text file? [131088620020] |Yes, I know it sounds odd. [131088620030] |Sequential and binary splits don't mix... [131088620040] |That is, unless the sequence is the byte offset within the file itself... [131088620050] |I've scrambled together a binary-split search in a bash script, using dd. [131088620060] |It has search-and-find times of 3-9 seconds for an 8 GB file... so it works (but slower than I know it can be)... [131088620070] |I'd really prefer not to have to polish this wheel; it just took my fancy to do it as an exercise in bash (having a project is the best way to learn a language, etc). [131088620080] |I think this would be pretty straightforward in C/C++, etc... [131088620090] |I'm curious to see some other examples (particularly bash ones). [131088630010] |All the basic text processing utilities are meant to act as filters, and most are meant to process their input as a stream (i.e. read a little input, process it, write the corresponding output, repeat). dd is a little unusual, both by its syntax and by the options it offers. dd is the only shell interface to lseek, and as you've noticed it's clumsy. [131088630020] |When you reach this point, it's time to switch to a more powerful scripting language such as Perl or Python. [131088640010] |How do I configure SELinux to allow outbound connections from a CGI script? [131088640020] |I'm migrating to a new webserver which has SELinux set up (running CentOS 5.5).
[131088640030] |I've got it set up so that it can execute CGI scripts with no problem, but some of the older Perl-based scripts are failing to connect to remote webservices (RSS feeds and the like). [131088640040] |Running grep perl /var/log/audit/audit.log gives: [131088640050] |As far as my crash course in SELinux goes, it looks like it is actively refusing the outbound connection, but how do I configure it to allow CGI scripts to make outbound requests? [131088650010] |You probably need to enable the httpd_can_network_connect SELinux boolean: [131088650020] |Run as root: [131088660010] |How can I use my server to compile a kernel for my laptop? [131088660020] |My laptop, an HP Pavilion with an nVidia card, has some issues with suspending. [131088660030] |Namely, FireWire breaks on suspend and the nvidia drivers cause Xorg to hang on resume. [131088660040] |I'd like to compile my own kernel to build FireWire in instead of as a loadable module, and disable agpgart, to see if these changes fix these issues... [131088660050] |However, my laptop isn't the fastest, nor does it have a ton of RAM, and its fans are on their last legs. [131088660060] |I'd like to configure the kernel build on the laptop but compile the kernel on our in-house VMware server, which has a lot more horsepower. [131088660070] |Both the laptop and the server have Ubuntu on them (Ubuntu Desktop on the laptop, Ubuntu... wait for it... [131088660080] |Server on the server. [131088660090] |Bet you never would have guessed that!) [131088660100] |How can I use one Linux system to compile a kernel for the architecture of a different Linux system?
[131088670030] |For Fedora or other Red Hat based distributions, you'd simply download the kernel source RPM (e.g. with yumdownloader --source kernel), unpack it, modify the config to meet your needs, and rebuild under mock with the appropriate parameters for the target system. [131088670040] |For Ubuntu, the actual actions taken are different but the steps are similar. [131088670050] |I haven't ever done this myself, but there's a detailed help document on this at https://help.ubuntu.com/community/Kernel/Compile, and in summary:
  • Download the kernel source package with sudo apt-get install linux-source
  • [131088670070] |
  • Make your modifications to the config
  • [131088670080] |
  • Build using fakeroot and the debian/rules script that's part of the package
  • [131088670090] |
  • Take the resulting .deb files and there you go.
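Put together, the steps above might look roughly like this (a sketch based on the linked help page; the package names and the debian/rules targets vary between Ubuntu releases, so treat them as assumptions and check the help page for your release):

```shell
# On the fast build server (same Ubuntu release/architecture as the laptop)
sudo apt-get install linux-source fakeroot build-essential

# Unpack the kernel source and start from an existing config
# (e.g. one prepared on the laptop with 'make menuconfig' and copied over)
tar xjf /usr/src/linux-source-*.tar.bz2
cd linux-source-*/
cp /boot/config-"$(uname -r)" .config
make menuconfig          # build FireWire in, disable agpgart, etc.

# Build installable .deb packages using the packaging scripts in the source
fakeroot debian/rules clean
fakeroot debian/rules binary

# Copy the resulting ../linux-image-*.deb to the laptop and install it there:
# sudo dpkg -i linux-image-*.deb
```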
  • [131088680010] |Can a Gnome Terminal profile use UTF-8 by default? [131088680020] |I am on an Ubuntu (I think) system. [131088680030] |I don't have root, so I can't change the locale. [131088680040] |I want to make my default terminal profile use UTF-8 by default. [131088680050] |There should be a way to do this, either in the .gconf/apps/gnome-terminal/ directory somewhere, or in an environment variable, or something. [131088680060] |However, I can't seem to find it. [131088680070] |Edit with more details: [131088680080] |In a terminal, I have: [131088680090] |When I try to more a UTF-8 document in that new terminal, I get: [131088680100] |Which appears on my screen as dots. (The uffds were a cut and paste. [131088680110] |I left the "\noise:bgspeech" in there so you could see that ASCII cut and pasted correctly.) [131088690010] |I believe that gnome-terminal will Just Work if UTF-8 is enabled in the shell, so all you need to do is enable that. [131088690020] |Put [131088690030] |in ~/.bashrc and there you go. [131088690040] |EDIT: [131088690050] |Okay, so, the answer is currently you can't set this. [131088690060] |Gnome Terminal follows the current environment's LANG setting and uses the encoding for that as the default. [131088690070] |So you need to get LANG to contain UTF-8 before gnome-terminal is launched. [131088690080] |Setting this in ~/.bashrc should do it — you'll just need to log out and log in again. [131088690090] |(Note that it's actually better to put this in ~/.bash_profile so you can override it for subshells, but I'm not sure that bash is necessarily run as a login shell as part of setting up the Gnome environment. [131088690100] |That's worth testing....) [131088700010] |I just checked: in menu -> Terminal -> Set character encoding, it is UTF-8 [131088700020] |

    The terminal and bash are not the same thing.

    [131088700030] |I would start by doing cat utf-8-file (cat and bash will pass this file unchanged to the terminal; well, actually to stty, and stty will convert newline to carriage return + newline, etc.). If this displays the file properly then gnome-terminal is set up. [131088700040] |(This so far is all I have ever done, as I use UTF-8 in English; it was already set up in Ubuntu 10.10 and Debian 6 for me.) [131088700050] |So then, just to set up bash etc.

    Re-reading ~/.bashrc

    [131088700070] |If you edit ~/.bashrc you must re-read it with . ~/.bashrc (or start a new shell) (don't forget the dot) [131088710010] |How to recover a crashed Linux md RAID5 array? [131088710020] |Some time ago I had a RAID5 system at home. [131088710030] |One of the 4 disks failed, but after removing it and putting it back it seemed to be OK, so I started a resync. [131088710040] |When it finished I realized, to my horror, that 3 out of 4 disks had failed. [131088710050] |However, I don't believe that's possible. [131088710060] |There are multiple partitions on the disks, each part of a different RAID array.
  • md0 is a RAID1 array comprised of sda1, sdb1, sdc1 and sdd1.
  • [131088710080] |
  • md1 is a RAID5 array comprised of sda2, sdb2, sdc2 and sdd2.
  • [131088710090] |
  • md2 is a RAID0 array comprised of sda3, sdb3, sdc3 and sdd3.
  • [131088710100] |md0 and md2 report all disks up, while md1 reports 3 failed (sdb2, sdc2, sdd2). [131088710110] |It's my understanding that when hard drives fail, all the partitions should be lost, not just the middle ones. [131088710120] |At that point I turned the computer off and unplugged the drives. [131088710130] |Since then I have been using that computer with a smaller new disk. [131088710140] |Is there any hope of recovering the data? [131088710150] |Can I somehow convince mdadm that my disks are in fact working? [131088710160] |The only disk that may really have a problem is sdc, but that one too is reported up by the other arrays. [131088710170] |Update [131088710180] |I finally got a chance to connect the old disks and boot this machine from SystemRescueCd. [131088710190] |Everything above was written from memory. [131088710200] |Now I have some hard data. [131088710210] |Here is the output of mdadm --examine /dev/sd*2 [131088710220] |It appears that things have changed since the last boot. [131088710230] |If I'm reading this correctly, sda2, sdb2 and sdc2 are working and contain synchronized data, and sdd2 is a spare. [131088710240] |I distinctly remember seeing 3 failed disks, but this is good news. [131088710250] |Yet the array still isn't working: [131088710260] |md0 appears to be renamed to md127. md125 and md126 are very strange. [131088710270] |They should be one array, not two. [131088710280] |That used to be called md1. md2 is completely gone, but that was my swap so I don't care. [131088710290] |I can understand the different names and it doesn't really matter. [131088710300] |But why is an array with 3 "active sync" disks unreadable? [131088710310] |And what's up with sdd2 being in a separate array? [131088710320] |Update [131088710330] |I tried the following after backing up the superblocks: [131088710340] |So far so good. [131088710350] |Since sdd2 is a spare I don't want to add it yet. [131088710360] |Apparently I can't do that.
[131088710370] |That didn't work either. [131088710380] |Let's try with all the disks. [131088710390] |No luck. [131088710400] |Based on this answer I'm planning to try: [131088710410] |Is it safe? [131088710420] |Update [131088710430] |I have published the superblock parser script I used to make that table in my comment. [131088710440] |Maybe someone will find it useful. [131088710450] |Thanks for all your help. [131088720010] |First check the disks by trying to run a SMART self-test. [131088720020] |It might take a few hours to finish, but check each drive's test status every few minutes, i.e. [131088720030] |If the status of a disk reports "not completed because of read errors", then this disk should be considered unsafe for md1 reassembly. [131088720040] |After the self-test finishes, you can start trying to reassemble your array. [131088720050] |Optionally, if you want to be extra cautious, move the disks to another machine before continuing (just in case of bad RAM/controller/etc). [131088720060] |Recently, I had a case exactly like this one. [131088720070] |One drive failed; I re-added it to the array, but during the rebuild 3 of the 4 drives failed altogether. [131088720080] |The contents of /proc/mdstat were the same as yours (maybe not in the same order) [131088720090] |But I was lucky and reassembled the array with this [131088720100] |By looking at the --examine output you provided, I can tell the following scenario happened: sdd2 failed, you removed it and re-added it, so it became a spare drive trying to rebuild. [131088720110] |But while rebuilding, sda2 failed and then sdb2 failed. [131088720120] |So the events counter is bigger in sdc2 and sdd2, which are the last active drives in the array (although sdd didn't have the chance to rebuild and so it is the most outdated of all). [131088720130] |Because of the differences in the event counters, --force will be necessary.
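A sketch of that forced assembly (the array and partition names are the ones from the question; back up the superblocks and double-check the member list against your own --examine output before running anything):

```shell
# Stop the half-assembled arrays left over from auto-detection
mdadm --stop /dev/md125 /dev/md126

# Force assembly from the three members whose data is in sync,
# leaving the outdated spare (sdd2) out for now
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2

# Verify the array came up before touching sdd2
cat /proc/mdstat
```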
[131088720140] |So you could also try this [131088720150] |To conclude, I think that if the above command fails, you should try to recreate the array like this: [131088720160] |If you do the --create, the missing part is important: don't try to add a fourth drive to the array, because then construction will begin and you will lose your data. [131088720170] |Creating the array with a missing drive will not change its contents, and you'll have the chance to get a copy elsewhere (RAID5 doesn't work the same way as RAID1). [131088720180] |If that fails to bring the array up, try the solution (a Perl script) here: Recreating an array [131088720190] |If you finally manage to bring the array up, the filesystem will be unclean and probably corrupted. [131088720200] |If one disk fails during a rebuild, it is expected that the array will stop and freeze, not doing any writes to the other disks. [131088720210] |In this case two disks failed; maybe the system was performing write requests that it wasn't able to complete, so there is some small chance you lost some data, but also a chance that you will never notice it :-) [131088720220] |edit: some clarification added. [131088730010] |I have experienced many problems while using mdadm, but have never lost data. [131088730020] |You should avoid the --force option, or use it very carefully, because you can lose all of your data. [131088730030] |Please post your /etc/mdadm/mdadm.conf [131088740010] |Is locking the screen safe? [131088740020] |See USB driver bug exposed as "Linux plug&pwn", or this link [131088740030] |Two choices [GNOME, Fedora 14]:
  • Use the gnome-screensaver
  • [131088740050] |Use the "switch user" function [gnome menu -> log out -> switch user]
[131088740060] |So the question is: which one is the safer method to lock the screen when a user leaves the PC? [131088740070] |Is it true that using method [2] is safer? [131088740080] |The way I see it, gnome-screensaver is just a "process", so it could be killed. [131088740090] |But if you use the log out/switch user function, it's "something else". [131088740100] |Using the "switch user" function, could there be a problem like with gnome-screensaver? [131088740110] |Could someone "kill a process" and presto...the lock is removed? [131088740120] |Could the GDM [??] "login window process" get killed and the "lock" get owned? [131088740130] |If method [2] is safer, then how can I put an icon on the GNOME panel to launch the "switch user" action with one click? [131088750010] |Well, your first link is about kernel-mode arbitrary code execution; there is not much you can do against that. [131088750020] |Logging out won't help. [131088750030] |Grsecurity and PaX could prevent this, but I'm not sure. [131088750040] |It surely protects against introducing new executable code, but I can't find any evidence that it randomizes where the kernel code is located, which means an exploit could use the code already in executable memory to perform arbitrary operations (a method known as return-oriented programming). [131088750050] |Since this overflow happens on the heap, compiling the kernel with -fstack-protector-all won't help. [131088750060] |Keeping the kernel up to date and people with pendrives away seems to be your best bet. [131088750070] |The second method is the result of a badly written screensaver, which means logging out prevents that particular bug. [131088750080] |Even if the attacker kills GDM he will not get in. Try killing it yourself from SSH. [131088750090] |You get a black screen or a text-mode console. [131088750100] |Besides, AFAIK GDM runs as root (like login), so the attacker would need root privileges to kill it.
[131088750110] |Switching users doesn't have this effect. [131088750120] |When you switch user, the screen is locked with the screensaver and GDM is started on the next virtual terminal. [131088750130] |You can press [ctrl]+[alt]+[f7] to get back to the buggy screensaver. [131088760010] |Install grub on hard disk used in another system [131088760020] |So I have a 512MB flash chip used in an embedded system with the following partition table: [131088760030] |I'm using buildroot on my Ubuntu (development) box to compile the 200MB ext2 image for the "normal" partition. [131088760040] |At this point on my dev box I dd the image created from buildroot to the flash chip (plugged in with an IDE-to-USB connector on /dev/sdd): [131088760050] |OK, fine, this works and I can mount /dev/sdd3 and see the entire filesystem that the embedded device will use. [131088760060] |Now, I want to install grub on this flash chip and am lost on how to do this. [131088760070] |I've tried: [131088760080] |But when I plug the flash chip into my embedded device and turn it on, grub won't load (it just sits at a black screen with a blinking cursor--no error). [131088770010] |You need to inform Grub that your disk will be the primary hard disk in the new system, and to let it know where to find the part of the bootloader that doesn't fit in the boot sector. [131088770020] |Grub calls the correspondence between the boot-time disk designations and the disk designations in the running operating system the device map. [131088770030] |I think you'll get a working bootloader if you edit /media/sdd3/boot/grub/device.map to contain [131088770040] |then run grub-install --root-directory=/media/sdd3 /dev/sdd. [131088780010] |With a lot of searching and a lot of guess-and-check I came upon the solution to my problem: [131088780020] |First, dd the rootfs image buildroot creates: [131088780030] |Then, copy /boot from sdd3 to sdd1, create a menu.lst file, and copy over bzImage.
[131088780040] |Finally, run grub: [131088780050] |Plug the drive into the system and everything loads. [131088790010] |Why can't I use arrow keys in terminal on Debian 6? (nonroot) [131088790020] |When I am on a non-root user I cannot use up/down to list my previous commands, and if I am typing I can't use left/right to, say, add in a directory or correct spelling. [131088790030] |Worse is that I can't use Tab. [131088790040] |I could write /var/www/mylongsitena and pressing Tab will not autocomplete it. [131088790050] |It's EXTREMELY annoying. [131088790060] |How can I change this? [131088790070] |IIRC Debian etch and lenny didn't do this. [131088790080] |How can I undo this change? [131088800010] |You appear to have /bin/sh as the login shell of your non-root user, and /bin/sh points to dash. [131088800020] |Dash is a shell designed to execute shell scripts that stick to standard constructs, with low resource consumption. [131088800030] |The alternative is bash, which has more programming features and interactive features such as command-line history and completion, at the cost of using more memory and being slightly slower. [131088800040] |Change your login shell to a good interactive shell. [131088800050] |On the command line, run [131088800060] |(You can use /bin/bash if you prefer.) [131088800070] |Configure your user management program to use that different shell as the default login shell for new users (by default, the usual command-line program adduser uses /bin/bash). [131088810010] |What directories/file permissions should I ensure are set? [131088810020] |I have a similar question about default software configuration. [131088810030] |For this question I would like to ask: [131088810040] |What directories/file permissions should I ensure are set? [131088810050] |Apparently, it is normal to get hundreds of break-in attempts per day. [131088810060] |So I checked what files and folders are writable as a non-root user. [131088810070] |It was all good.
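A check like the one just described can be done with find; here is a sketch, exercised on a scratch directory rather than the real filesystem:

```shell
# Build a scratch tree containing one world-writable file to be found:
top=$(mktemp -d)
touch "$top/safe" "$top/loose"
chmod 644 "$top/safe"
chmod 666 "$top/loose"
# -perm -0002 matches anything the "other" class may write to;
# point it at / (or /etc, /var, ...) to audit a real system.
find "$top" -type f -perm -0002
```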
[131088810080] |Now I need to protect passwords and such, so I checked read permissions. [131088810090] |I am kind of horrified. [131088810100] |By default my Linux distro has /root readable. [131088810110] |That's where I set my MySQL root password. [131088810120] |But then I looked further and saw /etc was readable. [131088810130] |Any user could have got into /etc/ssmtp/ssmtp.conf and found the login/password I use for my mail (which cron uses to contact me) and potentially used it to spam everyone and/or get my server or domain blacklisted. [131088810140] |I set /etc to 750. [131088810150] |Will there be any problems with that? [131088810160] |I'm sure there are other vulnerabilities due to read access. [131088810170] |What files/directories should I ensure are not readable or writable? [131088810180] |-edit- OK, so I changed /etc back to 755. [131088810190] |But still, I need to ensure certain folders are not readable. [131088810200] |I changed apache and ssmtp. [131088810210] |I would like to know the others. [131088820010] |Any file that contains a password or passphrase should be readable only by the user(s) (and group(s) if applicable) that need to access this password. [131088820020] |The same goes for files containing any other kind of confidential information. [131088820030] |Most files in /etc need to be world-readable: they are either general system configuration files such as /etc/fstab and /etc/passwd, or configuration files of a specific application. [131088820040] |The few exceptions are files like /etc/shadow (user passwords), /etc/sudoers (users having special permissions), /etc/ppp/chap-secrets (PPP passwords), or /etc/ssl/private (directory containing private SSL certificates), and they normally come out-of-the-box with proper permissions. [131088820050] |The technical term for making /etc non-world-readable is shooting yourself in the foot. [131088820060] |Don't do it, it hurts. [131088820070] |It's rare to have a mail password in the system configuration.
[131088820080] |Usually, mail uses passwords to authenticate users, not systems, so that password would be somewhere in your home directory. [131088820090] |When you use unusual features, you do need to check that you are using them securely (in this case, I suspect you're not using the right tool for the job). [131088830010] |I know it's basically the same answer I gave to your other question, but in addition to what Gilles has said, you should use an automated security audit tool like tiger. [131088830020] |If your distro ships a setting by default, and tiger doesn't warn about it, then the setting should be fine. [131088830030] |For files and folders you create yourself, you can and should use your own discretion. [131088830040] |In my experience and opinion, there are really only a few files and folders that shouldn't be world-readable, and there's almost never a good reason to have anything world-writeable. [131088840010] |Is there any chance that some of these processes are malicious? [131088840020] |My significant other and I are sometimes paranoid that a foreign government usually associated with last year's Gmail security breach may be attempting to gain access to our computers. [131088840030] |There were a lot of strange processes running as root tonight, though I have not opened any root terminals. [131088840040] |Can someone please look at these and tell me, yes or no, are they suspicious, and why or why not? [131088850010] |That's a long list, surely you don't expect us to review it line by line? [131088850020] |It's normal to have many processes running as root: unix systems often have one process to do each job, so many system services get their own process. [131088850030] |In fact, some of these (e.g. all the /0 or / (the number identifies a CPU), and most of the ones beginning with k) are kernel threads. [131088850040] |If you're worried about someone gaining control of your machine, ps is not a useful tool. 
[131088850050] |Any halfway decent¹ rootkit contains code to hide any malicious process from process listings. [131088850060] |Even if the malicious code was not running as root and so couldn't change what the kernel reports, it would disguise itself as something innocuous like sh. [131088850070] |¹ Yes, “decent” may not be the right word here. [131088860010] |Assessing whether a process is malicious based on its name is an idea that has been outdated for ... well, a very long time ;) [131088860020] |False flag operations, anyone? [131088860030] |
  • an infector could well append/insert malicious code into any binary
  • [131088860040] |a malicious binary could very well pose under the same name as something you'd normally consider harmless, and your list gives no idea about the location of the binaries in the file system or their file bits. [131088860050] |For example, a setuid binary owned by root corresponding to one of these processes should at least be checked ...
  • [131088860060] |a rootkit usually attempts to hide, so it would not even appear in the list
[131088860070] |And that list is not exhaustive. [131088860080] |Also, a system that is running malicious code with superuser rights has no (technical) problem lying to you. [131088860090] |At the very least an offline analysis would be required. [131088860100] |If you use a package manager, you could compare the binaries at their expected locations against the hashes in the (signed) packages. [131088860110] |Overall that should leave only a tiny subset of actual binaries for you to inspect, besides the numerous scripts. [131088860120] |But even for scripts, those coming in packages will have an accompanying hash against which you can check them. [131088870010] |The sole rootkit I've ever encountered in the wild (under Solaris 8 a long time ago) did run a password sniffer, as an "lpsched" process. [131088870020] |The problem was that it ran two of them (a bug in the rootkit) and ran them out of a directory that "man lpsched" said wasn't where lpsched lived. [131088870030] |Also, "ps" had been trojaned to not show the extra weird lpsched processes, but top showed them. [131088870040] |If you're really concerned, look at all the PIDs in /proc. [131088870050] |Look at what /proc/$PID/exe links to, to see where the executable really lives. [131088870060] |Double-check that against where the executable ought to live. [131088870070] |Try "ls" on all the directories you find that way to see if "ls" shows them all. "ls" not showing a directory is a dead giveaway that something's wrong. [131088870080] |If any specific process seems suspicious, get chkrootkit (http://www.chkrootkit.org/) and rootkit hunter (http://rkhunter.sourceforge.net/) and try them to see if they find anything. [131088870090] |You have to be aware that some rootkits float around in the wild but have never gotten incorporated in those rootkit hunters. [131088880010] |Reinstall Fedora, keep files?
[131088880020] |Is it possible to reinstall Fedora (I have the DVD that I used to install it yesterday), and keep the files in my home directory? [131088880030] |I seem to have messed up my system while trying to get my monitor resolution to work correctly: I installed Fedora on a dual-boot Windows desktop. [131088880040] |Now I can't get full monitor resolution. [131088880050] |The step that caused my problem was yum --enablerepo=rawhide upgrade kernel xorg-x11-drv-ati xorg-x11-drv-ati-firmware, so I'm looking to either figure out how to get Fedora to boot, or just reinstall Fedora, but keep the files I've set up so far. [131088890010] |Yeah, sorry about that. :) [131088890020] |It is possible, but it is only easy if you made /home a separate partition. [131088890030] |Despite my best efforts, this isn't the default. [131088890040] |You don't have a lot of files yet, though, do you? [131088890050] |I think the best bet is to boot into single-user mode and copy the contents to a USB memory stick. [131088890060] |That should be easy. [131088890070] |You'll need to mount it manually -- plug it in, wait a few seconds, and then type dmesg and note the device that it says was inserted. [131088890080] |Then, mount that with: [131088890090] |replacing sdc with whatever dmesg said. [131088890100] |(You may need sdc1, depending on how the device was formatted). [131088890110] |Then, change to the root directory (cd /) and run [131088890120] |tar cJvf /mnt/mattdm-is-sorry.tar.xz /home [131088890130] |and when that completes, run [131088890140] |sync; sleep 3; umount /mnt [131088890150] |(The sleep is for superstition.) [131088890160] |The reason for tar rather than just copying is to preserve the Unix metadata, because the USB drive will be FAT-formatted, and we don't want to mess with that right now. [131088890170] |Then, once you have your system repaired (I still recommend the F15 alpha!), you can extract it with tar xf /mnt/mattdm-is-sorry.tar.xz.
[131088890180] |If you do that in / as root, it'll overwrite everything in your new /home, so probably the best thing to do is boot the new system into single-user mode and do that first thing. [131088890190] |Oh, and this time, while you're installing, make /home its own partition. :) [131088900010] |How can I configure SELinux when the semanage command isn't found? [131088900020] |I'm having a very hard time configuring SELinux to allow sending mail. [131088900030] |Looking into the SELinux documentation I've found I can manage ports via the semanage command, but the command can't be found. [131088900040] |Is there another way to manage ports using SELinux, or a way for me to find this command? [131088900050] |Worst case: is there a way to disable SELinux, or switch to permissive mode, without rebooting? [131088900060] |I'm running Fedora. [131088900070] |Thanks! [131088910010] |semanage is installed at /usr/sbin/semanage on my system — maybe that's just not in your path. [131088910020] |It's part of the policycoreutils package, which is part of the default install but may be missing (yum -y install policycoreutils if it is). [131088910030] |The "big switch" approach is setenforce Permissive as root. [131088910040] |(And setenforce Enforcing to put it back.) [131088910050] |What exactly are you trying to do? [131088910060] |One approach is to find the audit log messages from your blocked action, and use audit2allow to generate a policy module. [131088910070] |But there may be a setting in the default Fedora policy which will enable what you want. [131088910080] |Run getsebool -a to see a list, and use setsebool to change it. [131088920010] |Partitions problem with Debian Squeeze and Windows 7 (Partition 1 does not end on cylinder boundary) [131088920020] |I have just installed Debian Squeeze on a hard disk where Windows 7 is also installed.
[131088920030] |Now, if I run cfdisk, I get the following message: [131088920040] |Is there a way to fix it without reinstalling everything? [131088920050] |EDIT: Output from fdisk -l: [131088930010] |How do I disable remote root login via ssh? [131088930020] |How do I disable remote root login via ssh? [131088930030] |I want to log into my server (I use keys on my main comp), then su into root instead of accessing root directly. [131088930040] |I am using Debian. [131088930050] |I followed guides online which say to add PermitRootLogin no to the file, and another mentions Protocol 2. [131088930060] |Then restart ssh: /etc/init.d/ssh restart. [131088930070] |I did this and it did not work. [131088930080] |I was able to log into root using PuTTY. [131088930090] |How do I disable remote root login on Debian? [131088940010] |I'm going to take a guess on this one, but I'm pretty confident. [131088940020] |I bet there's a PermitRootLogin yes line already in your file. [131088940030] |SSH will only use the first line it finds, and will ignore a duplicate further down. [131088940040] |So if you just added PermitRootLogin no to the end of the file, leaving the line above in place, there will be no effect. [131088950010] |One of the peculiarities of ssh is that PAM-based authentication can't be fully controlled by it directly. [131088950020] |You should check the PAM stack /etc/pam.d/sshd; I would add pam_access to the auth section (see the pam_access(8) and access.conf(5) manual pages). [131088950030] |That said, PermitRootLogin No should work regardless. (PermitRootLogin without-password is the screw case.) [131088960010] |How do I create a separate partition for my /home directory? [131088960020] |I was reading http://docs.fedoraproject.org/en-US/Fedora/14/html/Installation_Guide/s1-diskpartitioning-x86.html, but it's not clear to me what this means. [131088960030] |What's an LVM Volume Group versus a Hard Drive?
[131088960040] |I want to make sure that my home directory is on its own partition so that I can more easily reinstall and upgrade the OS. [131088960050] |EDIT: If /home is on its own logical volume, will I be able to easily reinstall the OS, or is a logical volume a different type of entity from a partition? [131088970010] |An LVM volume group is an abstraction of a hard drive, or multiple hard drives, or multiple RAIDs, or... [131088970020] |It's really a separate question, so I don't think it's pertinent to get more detailed than that here. [131088970030] |The point is, both LVM groups and hard drives can contain partitions. [131088970040] |Which way you go is a matter orthogonal to your main question. [131088970050] |The easiest way to make /home a separate partition is to do it during OS installation, with a complete hard drive repartition and format pass. [131088970060] |You can change your mind and make /home separate later, but it's more work. [131088970070] |The way you create a separate /home partition differs depending on the particular OS installer, but these days, you typically have to tell it you want to do an "advanced" hard drive setup, overriding its simple defaults. [131088970080] |You can then choose to reserve some amount of hard drive space for /home and leave the rest of the disk (or LVM group, or RAID, or...) to the rest of the system. [131088970090] |To make /home a separate partition after you've installed the OS, you either have to repartition or add another volume. [131088970100] |Just as a simple example, you could insert a USB stick and put /home on it like this: [131088970110] |What we've done so far is reformat the USB stick with a fresh ext3 filesystem, then copied the entire contents of /home over to it while preserving all permissions, timestamps, etc. [131088970120] |Then we've laid the new /home copy over the top of the old for testing.
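The command listing for this step did not survive in this copy. A sketch of the copy step, rehearsed on scratch directories instead of a real USB stick (the mkfs.ext3/mount steps and the /dev/sdc1 device name are assumed, and cp -a stands in for whatever metadata-preserving copy was originally shown):

```shell
# Stand-ins for /home and the freshly formatted, mounted USB stick:
src=$(mktemp -d)   # pretend this is /home
dst=$(mktemp -d)   # pretend this is the mounted /dev/sdc1
mkdir -p "$src/user"
echo "hello" > "$src/user/file"
chmod 640 "$src/user/file"
# cp -a copies recursively while preserving permissions, ownership,
# and timestamps:
cp -a "$src/." "$dst/"
stat -c '%a' "$dst/user/file"   # the 640 mode survives the copy
```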
[131088970130] |Once you're satisfied that it works, you could unmount /dev/sdc1, nuke the old /home and remount the new one. [131088970140] |Beware, this is dangerous. [131088970150] |I am presenting it as an example, not a recommendation. [131088970160] |Also dangerous is repartitioning the drive after it's already been formatted. [131088970170] |You'd have to do that if you wanted to move /home to a new partition without adding another volume to the machine. [131088970180] |The gparted tool can do this, but it's not without risks. [131088970190] |Having opened up space for a new partition and created it with gparted, you could do something much like I show above to move the contents of the old /home directory to the new partition. [131088970200] |You should also beware that making /home separate has its own problems. [131088970210] |One is, it forces you to set aside a slice of your disk for /home and then live with it. [131088970220] |It's easy to get too clever with partitioning; you could end up with like 10 partitions, 8 of which are full and 2 which have less than 10% usage, and no easy way of reassigning space from the empty ones to the full ones. [131088970230] |LVM and gparted each provide some solutions to this, but the important point to keep in mind is, be very sure you need the extra partitions. [131088970240] |The more moving parts, the more things there are to break. [131088980010] |Simple answer. [131088980020] |First, put LVM to the side. [131088980030] |To make a separate /home, make a partition on your hard drive using either fdisk or gparted, a GUI partitioning tool. [131088980040] |The Linux method of accessing partitions is straightforward: /dev/hda, /dev/hdb ... /dev/hd[a letter] are IDE drives; /dev/sda, /dev/sdb, ... /dev/sd[a letter] are SCSI/SATA/USB drives. [131088980050] |The last letter describes the order in which they appear. [131088980060] |The 1st SATA drive is /dev/sda, the 2nd SATA drive is /dev/sdb, and so on.
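To see which disks and partitions the kernel has detected on a given machine, one quick check (a sketch; the exact device names will vary by system):

```shell
# /proc/partitions lists every block device the kernel knows about
# (e.g. sda, sda1, sda2 on a machine with a single SATA disk):
cat /proc/partitions
# The matching device nodes live under /dev; the glob may match nothing
# on a machine without sd*/hd* devices, hence the guard:
ls /dev/sd* /dev/hd* 2>/dev/null || true
```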
[131088980070] |When you use fdisk or gparted to partition the drives, individual partitions are accessed by numbers ... /dev/sda1, /dev/sda2 ... . [131088980080] |Format your partition. [131088980090] |So now edit /etc/fstab and enter something like /dev/sd?? /home ext3 defaults 0 2 (device, mount point, filesystem type, mount options, dump flag, fsck order). [131088980100] |I'm sorry I can't be more specific, my primary hd just died :( and I have no examples. (On the plus side I was replacing it next week anyway; on the minus side, couldn't it have held out another week?) But between man and Google you should get it done. [131088980110] |Also, you shouldn't change the boundaries of any partitions with data on them by using fdisk. [131088980120] |If you are familiar with Partition Magic, gparted is a similar tool and can be used to resize and move partitions safely. [131088980130] |There is also a liveCD called Parted Magic which will allow you to resize partitions. [131088980140] |It has been standard UNIX practice to keep as many partitions as you feasibly could. [131088980150] |Typically /, /home, /tmp, /root, /var are separate partitions. [131088980160] |On Linux I add /boot (since in the old days /boot had to be in the ext family). [131088980170] |There are several reasons for this: it made backups easier (just back up all or some of the partitions), if a partition became corrupted it kept the damage local, and it kept fragmentation low. [131088980180] |With more modern filesystems, I am not sure how relevant these reasons are anymore. [131088980190] |Let me, as an example, assume that some program keeps some of its temporary data in /tmp and has a bug which causes it to just fill /tmp. [131088980200] |This will cause programs to act flaky, but probably not crash the system, and give you a chance to repair the damage. [131088980210] |If your whole system were on one partition, it could cause a lot more problems. [131088980220] |Maybe even force you to try and repair the damage from a liveCD. [131088980230] |As for LVM.
[131088980240] |LVM basically creates a partition and then allows you to make some subpartitions from it. [131088980250] |If you need more space and there is free space left, you can use it to grow your subpartitions. [131088980260] |If you run out of space, you can add a second partition to the volume to make more available. [131088980270] |My one experience indicated it caused more problems than it's worth and chewed up more CPU than I liked, but YMMV. [131088980280] |Also, if you have a problem and have to use a liveCD, you have to be careful to pick one which can access your LVM partitions. [131088990010] |Recommended way to login to root? [131088990020] |After I found out hundreds of hacking attempts are made in a day, I decided I want to disable logging in to root remotely via a password. [131088990030] |But then I found I can't switch users in WinSCP, so I can't easily upload/download/edit files. [131088990040] |After I figured out how to disable remote logins, it appeared I can't log in remotely via key. [131088990050] |So I allowed it and disabled passwords. [131088990060] |But it turns out it disables passwords for all users. [131088990070] |How do you recommend protecting myself when logging in to my server? Someone mentioned he uses one-time login tokens, but I am unsure how to use those. [131088990080] |But anyway, how do I disable password login for root but allow it for everyone else? I must be able to su into root from the other accounts, because primarily I log into acidzombie24, then su to root and carry on normally. [131089000010] |To disable the root password, you need to lock the root account: [131089000020] |explanation: [131089000030] |the first line locks the root account, but also expires the password, meaning stuff might break (like the root crontab, for example!!). [131089000040] |The second line sets the root password to "never expire", so that this doesn't happen, and so that you can still use sudo su for example.
[131089000050] |Combine this with PermitRootLogin No in /etc/ssh/sshd_config, and you should be good to go. [131089010010] |I am not sure, at the time of writing this, about making su work, but see this article from IBM developerWorks for good information on securing SSH. [131089020010] |Trouble with grep -o regex [131089020020] |I'm trying to use a grep command with the --only-matching flag, but it's not behaving as I would expect it to. [131089020030] |This command: [131089020040] |echo "1/2/3/4/5" | grep -oE "^([^/]+/){0,2}" [131089020050] |Gives this output: [131089020060] |I was expecting just 1/2/ [131089020070] |What's going on..? 3/4/ shouldn't match "^([^/]+/){0,2}" because it's not at the beginning of the line.. [131089020080] |(running GNU grep 2.5.1) [131089030010] |It was a bug in versions of GNU grep earlier than this commit (i.e. earlier than version 2.5.3). [131089030020] |Quoting the relevant part of the changelog: [131089030030] |The initial commit that described the bug also added a test for it: [131089040010] |Loop a list through awk [131089040020] |I'm sorry if this is extremely elementary, but I just can't figure out how to do this, and my research has failed me as well. [131089040030] |I have two files: data.csv and list.txt. [131089040040] |Here's an example of what they look like. [131089040050] |data.csv: [131089040060] |list.txt: [131089040070] |Now, I am trying to figure out how I can make a loop [131089040080] |where the command runs for each line of list.txt, replacing **LIST ITEM**. [131089040090] |How can this be accomplished? [131089040100] |I'm running this through Terminal on Mac OS X 10.5.7. [131089040110] |EDIT: [131089040120] |The desired output for the above example would be [131089040130] |EDIT2: [131089040140] |To be more clear, I am trying to avoid doing this: [131089040150] |And instead, run it in one command, somehow looping through all the lines of list.txt.
[131089050010] |I'm not entirely clear on what you're trying to do: replace LIST ITEM with what? [131089050020] |Just looking for a match anywhere and outputting the first field? [131089050030] |Also, your example list.txt appears to match anywhere in the line, which could potentially be problematic: what if list.txt at some point contains the line e? [131089050040] |That would match all but the last line of your sample data.csv. [131089050050] |This is a bit more complex than it could be; your field separator doesn't deal with the optional leading quotation mark on the first field or the optional trailing one on the last field. [131089050060] |Mine does, but at the price that if it's there the first field will be empty (the empty string before ^"?). [131089050070] |It also doesn't try to deal with embedded quotes. [131089050080] |A dedicated CSV parser would be a better idea if you need to support random generalized CSV. [131089060010] |

    Proof of Concept

    [131089060020] |This field separator deals with embedded quotes and/or commas [131089070010] |This meets the order of your desired output: [131089070020] |This reads the data.csv file into memory, mapping the whole line to field1. [131089070030] |Then, each line of the list.txt file is checked against each element of the field1 array. [131089070040] |If the data file is much larger than the list file, then it would make more sense to hold the smaller file in memory and loop over the larger file a line at a time: [131089080010] |Why isn't a straightforward 1/5/15 minute moving average used in Linux load calculation? [131089080020] |Hello [131089080030] |Until recently i thought the load average (as shown for example in top) was a moving average on the n last values of the number of process in state "runnable" or "running". [131089080040] |And n would have been defined by the "length" of the moving average: since the algorithm to compute load average seems to trigger every 5 sec, n would have been 12 for the 1min load average, 12x5 for the 5 min load average and 12x15 for the 15 min load average. [131089080050] |But then i read this article: http://www.linuxjournal.com/article/9001. [131089080060] |The article is quite old but the same algorithm is implemented today in the linux kernel. [131089080070] |The load average is not a moving average but .. an algorithm for which i don't know a name. [131089080080] |Anyway i made a comparison between the linux kernel algorithm and a moving average for an imaginary periodic load: [131089080090] |. [131089080100] |There is a huge difference. [131089080110] |Finally my questions are: [131089080120] |
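The listing that followed this colon did not survive here; below is a minimal sketch of that approach. The file names come from the question, but its sample contents and desired output were also lost, so the inputs here are stand-ins and the output is assumed to be the first CSV field of each matching line:

```shell
# Scratch directory so the sample files don't clutter anything:
cd "$(mktemp -d)"
# Stand-ins for the question's (lost) sample files:
cat > list.txt <<'EOF'
apple
EOF
cat > data.csv <<'EOF'
"apple",1
"banana",2
EOF
# Hold the small list.txt in memory, then stream data.csv a line at a time:
awk -F, '
    NR == FNR { want[$0]; next }   # first file: remember each list item
    { for (w in want) if (index($0, w)) print $1 }   # second file: print field 1 of matching lines
' list.txt data.csv
# prints "apple" (field 1 of the matching line, quotes included)
```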
  • Why was this implementation chosen over a true moving average, which has a clear meaning to anyone?
  • [131089080130] |Why does everybody speak about a "1 min load average" when much more than the last minute is taken into account by the algorithm? (Mathematically, every measurement since boot; in practice, taking the round-off error into account, still a lot of measurements.)
[131089080140] |Thank you for your time [131089090010] |This difference dates back to the original Berkeley Unix, and stems from the fact that the kernel can't actually keep a rolling average; it would need to retain a large number of past readings in order to do so, and especially in the old days there just wasn't memory to spare for it. [131089090020] |The algorithm used instead has the advantage that all the kernel needs to keep is the result of the previous calculation. [131089090030] |Keep in mind the algorithm was a bit closer to the truth back when computer speeds and corresponding clock cycles were measured in tens of MHz instead of GHz; there's a lot more time for discrepancies to creep in these days. [131089100010] |Not able to ssh in to remote machine using shell script in crontab [131089100020] |Below is the script which I am trying to run, which runs without any issue. [131089100030] |But once I add it to crontab, it doesn't give me the user. [131089100040] |Please give your thoughts. [131089100050] |Maybe the cron daemon runs with a limited environment, so we need to include some binaries...? [131089110010] |Who types the password? [131089110020] |The cron job can't get at your ssh-agent, so public key authentication won't work. [131089110030] |You need to supply ssh with a key file explicitly (see the -i option), since it can't query an agent; and that key must have an empty passphrase. [131089120010] |You can make ssh connections within a cron session. [131089120020] |What you need is to set up public key authentication to have passwordless access. [131089120030] |For this to work, you need to have PubkeyAuthentication yes in each remote server's sshd_config. [131089120040] |You can create a private/public key pair with or without a passphrase. [131089120050] |If you use a passphrase (recommended) you need to also start ssh-agent. [131089120060] |Without a passphrase, you only need to add the parameter -i your_identity_file to the ssh command line.
ssh will use $HOME/.ssh/id_rsa by default. [131089120070] |I replicated your example by using a key pair with a passphrase. [131089120080] |Here's how I did it. [131089120090] |1) Created the key pair with a passphrase. [131089120100] |Saved the private key as ~/.ssh/id_rsa_test, which should have the correct permissions by default. [131089120110] |We can enter an empty passphrase to not use one. [131089120120] |2) Sent the public key to the servers; did the same for all of them. [131089120130] |Remember they need to have PubkeyAuthentication enabled. [131089120140] |3) Ran ssh-agent with -s. [131089120150] |This will not kill it if you log out. [131089120160] |Its output is a valid shell script, setting the environment so the ssh client will know how to connect to it. [131089120170] |We save that to a file (only the first line is really needed). [131089120180] |4) Loaded the above into our current environment so we can use ssh-add to add our private key to ssh-agent, entering the passphrase from above. [131089120190] |5) Verified it is added. [131089120200] |6) The script I used, slightly modified from yours. [131089120210] |Notice that I did not enclose the ssh command in parentheses, and used $() rather than backticks, which is a better alternative for command substitution (this is bash-compatible; you didn't mention which shell you're using). [131089120220] |I used the exact same ssh command as yours. [131089120230] |7) My crontab (note that my sh is actually bash) [131089120240] |8) The output [131089120250] |The only problem with using a passphrase is that you need to enter it manually at least one time. [131089120260] |So, the above will not automatically work after a reboot.
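The mechanism behind steps 3 and 4 above (save ssh-agent's environment output to a file, then source that file from the shell cron spawns) can be sketched generically; a stand-in variable replaces a live ssh-agent here, so the example does not depend on one running:

```shell
# ssh-agent -s prints shell assignments like
# "SSH_AUTH_SOCK=...; export SSH_AUTH_SOCK;". We emulate that output
# with a stand-in variable name and file path:
env_file=$(mktemp)
printf 'MY_AUTH_SOCK=/tmp/agent.1234; export MY_AUTH_SOCK;\n' > "$env_file"
# A later shell (e.g. the one cron spawns for the script) loads it back:
. "$env_file"
echo "$MY_AUTH_SOCK"   # the variable is now available to ssh/ssh-add
rm -f "$env_file"
```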