[131018170010] |linux static compilation issue [131018170020] |I am building testdisk as static and when I run [131018170030] |make static [131018170040] |I am getting the following error [131018170050] |/usr/bin/ld: cannot find -luuid [131018170060] |collect2: ld returned 1 exit status [131018170070] |What's the problem? [131018170080] |In the makefile I have the following line [131018170090] |LIBS = -lz -lntfs -luuid -lcrypto -lext2fs -lcom_err [131018170100] |and I am getting errors on all the following flags [131018170110] |-luuid -lcrypto -lext2fs -lcom_err [131018180010] |The RPM packages for libuuid-devel for Fedora 13 appear to contain only the shared library. [131018180020] |Therefore you'd have to build it from source if you need a static library. [131018180030] |I expect that this is the same problem with a static -lcrypto and the others. [131018180040] |However, if it is TestDisk specifically that you are trying to compile, you probably shouldn't bother, as the partition repair utility is part of most LiveCD distributions, including Fedora. [131018190010] |How do I find where a port is? [131018190020] |If I want to find out which directory under /usr/ports contains a port like "gnome-terminal", how can I do that? [131018190030] |Is there an easy command? [131018190040] |At the moment I use things like [131018190050] |echo */gnome-terminal [131018190060] |but is there a website or guide which tells you this without having to use a trick? [131018200010] |There are several ways that you can find a port, including your echo technique. [131018200020] |To start, there is the ports site where you can search by name or get a full list of available ports. [131018200030] |You can also try: [131018200040] |but this--like using echo--won't work unless you type the exact name of the port. [131018200050] |For example, gnome-terminal works fine but postgres returns nothing.
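To make the glob and find approaches concrete, here is a hedged illustration against a mock ports tree (the category and port names below are fabricated for the demo; a real tree lives under /usr/ports):

```shell
# Build a tiny fake ports tree so the commands can be shown end-to-end.
ports=$(mktemp -d)
mkdir -p "$ports/x11/gnome-terminal" "$ports/databases/postgresql90-server"

# The exact-name glob trick from the question:
( cd "$ports" && echo */gnome-terminal )                  # -> x11/gnome-terminal

# find copes with partial names, unlike the glob:
find "$ports" -mindepth 2 -maxdepth 2 -name 'postgres*'
```

The -mindepth 2 -maxdepth 2 pair restricts matches to the category/port level, which is what keeps the results uncluttered.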
[131018200060] |Another way is: [131018200070] |but keep in mind that this won't return a nice list; it returns several \n delimited fields, so grep as necessary. [131018200080] |I've used both of the above methods in the past but nowadays I just use find: [131018200090] |Lastly, I will refer you to the Finding Your Application section of the handbook, which lists these methods along with sites like Fresh Ports, which is handy for tracking updates. [131018210010] |alternatives to wine [131018210020] |Is there some alternative to Wine (the Windows "emulator") which is more stable and more likely to work with my Windows games? [131018220010] |Unfortunately Wine is about the best bet for most Windows games, though you don't actually state what sort of games you are wanting to play. [131018220020] |For non-3D games virtualisation sometimes works well, but of course that still requires a Windows license in order to run the OS in a VM. [131018220030] |For really old games DosBox is the thing to try, even in the Windows world. [131018230010] |As David said, wine is about the only way to have Windows stuff running. [131018230020] |However when it comes to games there is Crossover Games from Codeweavers (they use wine and tweak it) and Cedega from TransGaming (they operate on a wine fork, imho). [131018230030] |Be aware that both are commercial products, though. [131018230040] |As I am unsure what the policy towards commercial products is, I do not provide any links. [131018230050] |Both products should be fairly easy to find on any search engine online. [131018240010] |As mentioned before, Wine is the most advanced compatibility layer (Wine Is Not an Emulator ;-)) you will find. [131018240020] |If you are not happy with it, there are two other projects based on wine but with some tweaks to support current games like Half Life 2, World of Warcraft, etc: [131018240030] |
  • Codeweavers CrossOver Games - http://www.codeweavers.com/products/cxgames/
  • [131018240040] |TransGaming's Cedega - http://www.cedega.com/
[131018240050] |Maybe this will satisfy your needs. [131018250010] |I advise VirtualBox; it is by far the most reliable Windows virtualization I've experienced. [131018250020] |There's one catch, though. [131018250030] |VirtualBox uses the WINE Direct3D wrapper for 3D acceleration done with Direct3D. [131018250040] |So it might actually be of no improvement on the hardware accelerated side... [131018260010] |How to compile and install programs from source [131018260020] |This is an issue that really limits my enjoyment of Linux. [131018260030] |If the application isn't in a repository or doesn't have an installer script, then I really struggle with where and how to install it from source. [131018260040] |Compared to Windows, it's easy. [131018260050] |You're (pretty much) required to use an installer application that does all of the work in a wizard. [131018260060] |With Linux... not so much. [131018260070] |So, do you have any tips or instructions on this, or are there any websites that explicitly explain how, why and where to install Linux programs from source? [131018270010] |Normally, the project will have a website with instructions for how to build and install it. [131018270020] |Google for that first. [131018270030] |For the most part you will do either: [131018270040] |
  • Download a tarball (tar.gz or tar.bz2 file), which is a release of a specific version of the source code
  • [131018270050] |Extract the tarball with a command like tar zxvf myapp.tar.gz for a gzipped tarball or tar jxvf myapp.tar.bz2 for a bzipped tarball
  • [131018270060] |cd into the directory created above
  • [131018270070] |run ./configure && make && sudo make install

[131018270080] |Or:

  • [131018270090] |Use git or svn or whatever to pull the latest source code from their official source repository
  • [131018270100] |cd into the directory created above
  • [131018270110] |run ./autogen.sh && make && sudo make install
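The tarball route above, walked end-to-end with a throwaway project ("myapp-1.0" and its one-line configure script are stand-ins, not a real release):

```shell
# Fabricate a release tarball so every step can actually run.
cd "$(mktemp -d)"
mkdir myapp-1.0
printf '#!/bin/sh\necho configured myapp\n' > myapp-1.0/configure
chmod +x myapp-1.0/configure
tar czf myapp-1.0.tar.gz myapp-1.0   # what upstream would publish

tar zxvf myapp-1.0.tar.gz            # step 2: extract (use j instead of z for .tar.bz2)
cd myapp-1.0                         # step 3: cd into the created directory
./configure                          # step 4 continues: && make && sudo make install
```

For a real project, configure would inspect your system and generate Makefiles; the make and sudo make install steps are omitted here because the dummy project has nothing to build.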
[131018270120] |Both configure and autogen.sh will accept a --prefix argument to specify where the software is installed. [131018270130] |I recommend checking out http://unix.stackexchange.com/questions/30/where-should-i-put-software-i-compile-myself for advice on the best place to install custom-built software. [131018280010] |Just want to add that there are package managers which compile packages from source, and handle all package dependencies, flags, etc. [131018280020] |In BSD systems it's ports: Using the Ports Collection [131018280030] |In Debian the apt-get package manager can install from source too: APT HOWTO: Working with source packages (the same goes for Ubuntu, Linux Mint and everything else based on Debian) [131018280040] |The Gentoo distro uses the portage package manager, which compiles the whole system from source only: Portage Introduction [131018280050] |The Slackware distro can compile packages too, but I don't know if there's any package manager for this there.. =) [131018280060] |Anyway, you can always compile packages manually as Sandy mentioned above =) It should also be possible to use the apt-get or portage package managers in any other distro... [131018290010] |I think it's just best to read the documentation coming with the specific program or application that you're wanting to install. [131018290020] |Usually there are readmes/READMEs inside the tarballs (the application source archive which you can usually download) or maybe even INSTALL files to read and learn about the preferred way of installing said application. [131018290030] |In short: RTFM ;) [131018300010] |Recently I've started using "Checkinstall" when installing from source outside of my package manager. [131018300020] |It builds a "package" from a 3rd-party tarball which can then be installed and managed (and uninstalled) through your package manager tools.
[131018300030] |Check out this article - http://www.linuxjournal.com/content/using-checkinstall-build-packages-source [131018310010] |A summary for using the Ports Collection in FreeBSD: [131018310020] |Find Port [131018310030] |Ports are organized by category, so if you don't know what category the port is in you have to find it first: [131018310040] |Sometimes there are too many entries that way. [131018310050] |I personally prefer: [131018310060] |Use the * when searching, since there are often multiple versions of a port available. [131018310070] |The depth argument ensures your return results aren't needlessly cluttered with matches you are unlikely to want. [131018310080] |Configuration [131018310090] |Often, you'll want to do some configuration; software such as Apache and Postgres practically requires it. [131018310100] |There are three main choices: command line, environment and make configuration files. [131018310110] |To get started with the command line: [131018310120] |this will list the default configuration options. [131018310130] |If you like the defaults you are ready to compile and install. [131018310140] |If not, [131018310150] |will bring up a dialog box where you can select which options you want. [131018310160] |(Don't confuse this with make configure, which configures your port with your chosen options!) [131018310170] |This is often sufficient, but for some software, like Apache, there is often complex configuration that a simple dialog won't handle. [131018310180] |For this, you should also look at the Makefile(s), which will sometimes offer additional make targets that give you more information. [131018310190] |To continue the Apache example, [131018310200] |will give you information on setting up your chosen modules, thread options and the like.
[131018310210] |If your port's defaults are mostly fine and you just want to change a few things, you can also just pass key=value pairs like environment variables: [131018310220] |Also, you can set switch options via the -D option: [131018310230] |For complex configuration, however, neither of the first two methods will be effective. [131018310240] |In this case you can make a configuration file and pass that to make with the __MAKE_CONF variable. [131018310250] |FreeBSD has a default configuration file: /etc/make.conf, which usually contains information on previously installed ports and other system settings. [131018310260] |To begin, create a file with your port's options, call it ~/myport.mk, and then combine that file with /etc/make.conf: [131018310270] |you can then double-check your configuration: [131018310280] |and if everything looks good: [131018310290] |BEWARE! [131018310300] |If you need to adjust your configuration settings after make configure or an installation in whole or part, you absolutely must clear your configuration first: [131018310310] |Failure to do so will result in unexpected interactions between the ports subsystem, your port's make defaults and your desired configuration. [131018310320] |That's kind of a lot for a summary, but the complexity of configuration is mostly about the app, not the port. [131018310330] |Bash, for example, doesn't really have any options. [131018310340] |Installation [131018310350] |This is the easy part: [131018310360] |or you can [131018310370] |which is just more typing. [131018310380] |That's pretty much it. [131018310390] |Obviously there is more you can do, such as recursively listing dependencies and configuration options, updating with patches and so on. [131018310400] |Here I will refer you to the Ports section of the Handbook, the port subsystem's man page (good info on additional make targets) and the make man page.
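The key=value override style described above can be demonstrated portably with a mock Makefile (the WITH_THREADS knob here is made up; a real port's Makefile reads similar variables):

```shell
# Write a minimal Makefile with a ?= default, then override it ports-style.
cd "$(mktemp -d)"
printf 'WITH_THREADS?=no\nall:\n\t@echo "threads: $(WITH_THREADS)"\n' > Makefile

make                   # uses the ?= default: threads: no
make WITH_THREADS=yes  # command-line value wins:  threads: yes
```

A variable assigned on the make command line overrides assignments inside the Makefile, which is exactly how the ports examples pass their options.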
[131018320010] |Anyone know the meaning of yy? [131018320020] |Since history questions are considered on topic, I figured I'd ask one that's been bugging me for quite some time and no one I know seems to know the answer to. [131018320030] |Does anyone know what the historical significance of the prefix yy is? [131018320040] |Most of the variables and functions generated by lex start with it, such as yylval and yylex(). [131018320050] |I also remember seeing a utility named yyrsa somewhere, but can't find references to it anywhere now. [131018320060] |Inquiring minds want to know! [131018330010] |I think it has something to do with yacc, which has files ending in .y, and requires a function called yylex. [131018340010] |Why is 8 numbers considered a "strong" password? [131018340020] |Just curious as to why 8 numbers are accepted as a password in Linux (CentOS if it matters) and not 8 characters of a non-dictionary word. [131018340030] |It's 10 vs 26 per digit after all. [131018340040] |(52 counting CaSe SeNsItIvE) [131018350010] |It all depends on the PAM settings. [131018350020] |If the policy requires at least 1 number in the password, it would match 8 digits but not 8 letters - it is a very common requirement, as people do choose highly "secure" passwords like the name of their wife ;) and the wife's name + a number has higher security. [131018350030] |It is not an inherent property of Linux or Unix, and an admin can easily change it. [131018360010] |Looking at obscure.c from the shadow-4.1.4.2 sources, we find: [131018360020] |I expect that the design decision had a lot to do with the social engineering factor: forcing people to invent arbitrarily difficult passwords is a great way to have them tape them to their screens. [131018360030] |The password "goodness" measure is coded to do things like [131018360040] |and other such measures if the system builder desires them. [131018360050] |From this I conclude the authors very likely understood what they were doing when considering the issue broadly.
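The 10-vs-26-vs-52 comparison in the question can be quantified as back-of-the-envelope entropy (bits = length * log2(alphabet size)). This assumes characters are drawn uniformly at random, which real passwords are not, so treat the numbers as upper bounds:

```shell
# Print the maximum entropy of an 8-character password for several alphabets:
# 10 digits, 26 lowercase letters, 52 mixed-case letters, 62 alphanumerics.
for size in 10 26 52 62; do
  awk -v s="$size" 'BEGIN { printf "8 chars, %2d-symbol alphabet: %.1f bits\n", s, 8 * log(s) / log(2) }'
done
```

So 8 digits top out around 26.6 bits versus roughly 45.6 bits for 8 mixed-case letters - the gap the question is pointing at; the PAM answer above explains why the all-digit version can still satisfy a numbers-required policy.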
[131018370010] |Debian Lenny: Want splashy to start directly after grub2... [131018370020] |I'm running GNU/Linux Debian Lenny with a 2.6.26-2-686 kernel. [131018370030] |I installed splashy for a bootsplash and updated grub to version 2. [131018370040] |Now I want splashy to start directly after the user has selected a start option in grub. [131018370050] |At the moment, when the user selects a start option a black screen with boot messages appears. [131018370060] |I want these messages to be displayed in splashy. [131018370070] |Or as an alternative, I want to define a background image which is displayed during this phase. [131018370080] |Do you know a solution? [131018380010] |Maybe you should try plymouth; according to its page: [131018380020] |...the boot messages are completely occluded. [131018390010] |While plymouth is certainly worth looking into (I use it myself), let's see if we can't actually answer this question for splashy.... [131018390020] |You need to make sure that the keyword splash is passed as a kernel argument at boot. [131018390030] |This can be added at boot time or in the configuration for your bootloader (i.e. /boot/grub/menu.lst for grub, /etc/lilo.conf for lilo, or /etc/default/grub for grub2). [131018390040] |You may also have to enable the framebuffer in your kernel. [131018390050] |To do so, you need to add another parameter to the kernel arguments: vga=791 This argument is most likely unneeded with newer kernels in Squeeze and Sid with KMS enabled. [131018390060] |So the line in /boot/grub/menu.lst might look like this: [131018390070] |Further documentation for splashy can be found here in case you run into other issues: [131018390080] |http://splashy.alioth.debian.org/wiki/faq [131018400010] |bash PS1 setup [131018400020] |I'm trying to get PS1 configured as follows. [131018400030] |And I have some questions. [131018400040] |
  • What's the difference between "\[\e[32;1m\]" and "\e[32;1m"? [131018400050] |Are they the same?
  • [131018400060] |After running the 'export PS1' command, it works well, but when I give input of around 20 characters, the characters are overwritten as I attached. [131018400070] |What's wrong with this?
  • [131018400080] |What's the meaning of STARTCOLOR (\e[40m) / ENDCOLOR (\e[0m)?

    [131018400090] |ADDED
    [131018400100] |After some tests, I found that the following change solves the problem. [131018400110] |That is, the "\e" format should be replaced by the "\[\e" format. [131018410010] |I have a helper function to set the prompt, and because I don't want to spend more time looking up escape code references, I've coded all the text color values into it. [131018410020] |You can then do: [131018410030] |Here is a link that explains the VT100 terminal codes: http://www.termsys.demon.co.uk/vtansi.htm [131018410040] |\[ - begin sequence of non-printing characters \] - end sequence of non-printing characters [131018410050] |
  • What's the difference between "\[\e[32;1m\]" and "\e[32;1m"? [131018410060] |Are they the same?
  • [131018410070] |Not the same; it should be \[\e[32;1m\]. Without \[ \] it would try to print the sequence in the console.
  • [131018410080] |What's the meaning of STARTCOLOR (\e[40m) / ENDCOLOR (\e[0m)?
  • [131018410090] |STARTCOLOR means set the background to black; ENDCOLOR means reset all text attributes, meaning 'give me the default console color'

[131018420010] |Most modern terminal emulators are able to use ANSI escape codes to control various aspects of the display. [131018420020] |Most of the ANSI codes begin with the 2-character code ESC-[, that is, the escape character (ASCII decimal 27) followed by the open square bracket character. [131018420030] |This sequence is also known as the CSI or Control Sequence Initiator. [131018420040] |Because the escape character is not one you can type directly (the Esc key has other, often application-specific, uses), bash uses '\e' to refer to it. [131018420050] |Changing the text colour uses the ANSI Set Graphics Mode command: [131018420060] |where the attribute value can be a list of values separated by semi-colons (;). [131018420070] |Normally just one value is used, although the bold attribute is useful in conjunction with the colour attributes. [131018420080] |Looking at the values listed in Alexander Pogrebnyak's answer, the 0 or 1 before the semi-colon selects bold or not: [131018420090] |There's a useful list of the codes here: http://ascii-table.com/ansi-escape-sequences.php [131018430010] |From the bash manual: [131018430020] |\[ begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt \] end a sequence of non-printing characters [131018430030] |\[ and \] are not passed to the terminal. [131018430040] |They tell bash that the characters between them are not going to be printed. [131018430050] |Without them, bash could not know that the sequence following the escape character (e.g. [32;1m) does not take up any space on-screen, which explains why it did not compute the length of the prompt correctly when you left them out.
[131018430060] |Note that you haven't been very consistent in your question (or perhaps it's just a mistake with Markdown); you need to have a literal backslash-bracket sequence in $PS1, not just a bracket (which would be displayed literally). [131018430070] |The escape sequences beginning with \e are interpreted by the terminal emulator. [131018430080] |They are documented in the Xterm control sequences (ctlseqs) document (other terminal emulators tend to be mostly compatible). [131018430090] |For example, \e[32;1m switches to bold and green foreground; \e[40m switches the background color to black; \e[0m restores the default attributes. [131018440010] |Change the PS1 color based on the background color? [131018440020] |I learned that I can change the format of PS1, especially the color of the string. [131018440030] |Then, is it possible to change the color based on the background color of the shell? [131018440040] |Or, how can I detect the background color of the shell? [131018450010] |As far as I know, there's no way to query the colors of the terminal emulator. [131018450020] |You can change them with \e]4;NUMBER;#RRGGBB\a (where NUMBER is the terminal color number (0–7 for light colors, 8–15 for bright colors) and #RRGGBB is a hexadecimal RGB color value) if your terminal supports that sequence (reference: ctlseqs). [131018450030] |Powerful color scheme mechanisms often have a dark or light background setting that you must supply to indicate whether you have a black or dark gray background, or a white or light gray background. [131018450040] |When you're configuring for yourself, it's usually enough to decide you'll always use the same background color. [131018460010] |If you are using gnome-terminal you can get the background color for any profile, in this case for the Default profile as [131018460020] |then you can decide how to set your prompt accordingly. 
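Putting the escapes above together, a minimal colored prompt might look like the following sketch (colors chosen arbitrarily):

```shell
# Bold green user@host, then reset to default attributes.  The \[ \] pairs
# mark the escape sequences as zero-width, so bash counts the prompt length
# correctly and long command lines wrap instead of overwriting themselves.
PS1='\[\e[32;1m\]\u@\h\[\e[0m\]:\w\$ '
printf '%s\n' "$PS1"   # show the raw prompt string
```

Leaving out the \[ \] pairs still renders the colors, but reproduces the overwriting-after-~20-characters symptom described in the question.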
[131018470010] |Fedora auto suspend [131018470020] |I am using Fedora 13, and for shutting down and rebooting automatically we have the following commands: [131018470030] |shutdown -h/-r now [131018470040] |Similarly, if I want to make my system go into suspend mode after some time, what is the command I should use? [131018480010] |The pm-suspend utility (which is contained in the pm-utils package and likely already installed on your machine) is what is used to send your computer into suspend mode. [131018480020] |Thus to suspend "right now," you can run (as root or using sudo): [131018480030] |Unfortunately, pm-suspend does not take a time parameter as far as I know. [131018480040] |However, you could write a wrapper script that takes a time parameter. [131018480050] |Save the script somewhere in your $PATH and chmod +x it. [131018480060] |Do not call it "suspend". [131018480070] |A simple one may look like this: [131018480080] |Warning: I have not tested this beyond "works for me". [131018480090] |You may consider changing the suspend command to pm-suspend-hybrid, which will also save a hibernation file in case you run out of power while suspended. [131018480100] |Other utilities that you may be interested in or that could make your script more robust are pm-hibernate and pm-is-supported. [131018490010] |The Unix philosophy is to have tools that perform one job, plus the shell to combine them. [131018490020] |So we'll combine a suspend command with a do-something-later command. [131018490030] |I don't know what the standard command for suspending is on Fedora (there are several floating around); I'll use pm-suspend, which is the usual command on Ubuntu. [131018490040] |To suspend after X seconds: sleep X && pm-suspend [131018490050] |To suspend at a given time: echo pm-suspend | at HH:MM [131018500010] |Installing Fedora vs. Installing Ubuntu [131018500020] |I am using Fedora 13, and it was my friend who did the installation for me.
[131018500030] |I have used Ubuntu as well, and I found it easier to install than Fedora. [131018500040] |Ubuntu uses the Wubi installer (if I am correct) and it's easier for users to install and remove Ubuntu. [131018500050] |A person who knows how to install and remove an application in Windows can install/remove Ubuntu as well. [131018500060] |Why is it not the same with Fedora? [131018500070] |Are there any steps being taken to make it more user-friendly? [131018510010] |Most of the significant distributions (including Fedora and Ubuntu) prefer to install from a boot CD-ROM or USB stick these days. [131018510020] |Windows need not be part of the process at all. [131018510030] |Wubi is a Windows application that can run Linux from a Windows file pretending to be a boot disk. [131018510040] |Its purpose is to have zero impact on the Windows system: [131018510050] |You keep Windows as it is; Wubi only adds an extra option to boot into Ubuntu. [131018510060] |Wubi does not require you to modify the partitions of your PC, or to use a different bootloader, and does not install special drivers. [131018510070] |It works just like any other application. [131018510080] |The Fedora LiveCD and Ubuntu LiveCD and even the tiny DSL LiveCD are the simplest installation methods. [131018520010] |An Ubuntu install is basically: boot from a live CD/USB stick, click through some GUI screens and you are done. [131018520020] |The only thing to mind is the partitioning, which is trivial whether you want a clean install without dual booting or a dual boot with Windows. [131018520030] |(If you already have Windows installed.) [131018520040] |In fact, I just did an F13 install the other day and it was eerily similar to Ubuntu installs. [131018530010] |processing $0 in bash [131018530020] |The $0 variable contains the path info of the script. [131018530030] |
  • How can I change the path info to an absolute path? [131018530040] |I mean, how do I process ~, ., .. or similar?
  • [131018530050] |How can I split the path info into directory and file name?
[131018530060] |I could use python/perl for this, but I want to use bash if possible. [131018540010] |You don't need to process things like ~; the shell does it for you. [131018540020] |That's why you can pass ~/filename to any script or program and it works -- all those programs don't handle ~ themselves; your shell converts the argument to /home/username/filename and passes that to the program instead: [131018540030] |If you need a canonical filename (one that doesn't include things like ..), use realpath (thanks Neil): [131018540040] |As for splitting the path into directory name and file name, use dirname and basename: [131018550010] |Using dirname and basename as mentioned by Michael should be the safest way to get what you want. [131018550020] |Anyway, if you really want to do this with "bash only tools" you could use parameter substitution: [131018550030] |This example is taken directly from the Advanced Bash Scripting Guide, which is worth a look. [131018550040] |The explanation is pretty simple: [131018550050] |${var#Pattern} Remove from $var the shortest part of $Pattern that matches the front end of $var. ${var##Pattern} Remove from $var the longest part of $Pattern that matches the front end of $var. [131018550060] |Look at the pattern like some regex and the # or ## as some kind of greedy/non-greedy modifier. [131018550070] |This might become useful if you have to do some more complicated extractions of a path's parts. [131018560010] |realpath is a command which tells you the real path (removes .. and symbolic links etc.) [131018560020] |It is standard with FreeBSD. [131018560030] |According to this discussion it's also available for Linux: [131018560040] |http://www.unix.com/shell-programming-scripting/89294-geting-real-path.html [131018560050] |That discussion also offers a bash solution: [131018570010] |How do I set the default ftp root folder for an Ubuntu user connecting to VSFTPD?
[131018570020] |How do I set/change the default ftp root folder for a specific user? [131018570030] |I want to be able to create a developer account that homes to different sites on a development box depending on what is currently being worked on. [131018570040] |EDIT: The server is running Ubuntu and vsftpd. [131018580010] |You can try the -s option to run a text file containing FTP commands. [131018580020] |In that command file you can easily lcd to different dirs. [131018580030] |Please note this option is not available on all Unix servers. [131018580040] |Can you elaborate on your requirement, like which OS (Windows or *nix) etc.? [131018590010] |If you specify user_config_dir in vsftpd.conf, you can set any config option on a per-user basis. [131018590020] |From man vsftpd.conf: [131018590030] |This powerful option allows the override of any config option specified in the manual page, on a per-user basis. [131018590040] |Usage is simple, and is best illustrated with an example. [131018590050] |If you set user_config_dir to be /etc/vsftpd_user_conf and then log on as the user "chris", then vsftpd will apply the settings in the file /etc/vsftpd_user_conf/chris for the duration of the session. [131018590060] |So, setting local_root in this way to the desired directory changes the FTP root for just that user. [131018600010] |flock(2) versus fcntl(2) over NFS [131018600020] |The Perl 5.x documentation states that its implementation of flock(..) will use one of the following native calls, starting at 1 and working toward 3 if unavailable: [131018600030] |
  • flock(2)
  • [131018600040] |fcntl(2)
  • [131018600050] |lockf(3)
[131018600060] |That's fine. [131018600070] |However, you may have noticed their disclaimer that flock(2) should not be used over NFS. [131018600080] |The doc suggests using a -Ud_flock flag to force Perl to use flock(2). [131018600090] |The man page of flock(2) (on Redhat) states a similar disclaimer about NFS issues. [131018600100] |My question is, why!?!? [131018600110] |I can't seem to find an in-depth article or explanation of WHY flock(2) is unsafe over NFS. [131018600120] |I've written several test scripts in C and Perl, on both Redhat (where flock(2) is being used) and on Solaris (where fcntl(2) is being used). [131018600130] |I ran strace/truss to ensure that Perl was indeed using flock(2) and fcntl(2) respectively. [131018600140] |I could not replicate any issues where a lock was not being honored! [131018600150] |What gives?? [131018610010] |I'm pretty sure that you are looking at legacy concerns. [131018610020] |Recall that the Perl5 manual was released in 1994 and that it was just an edit of Perl4's manual from 1991. [131018610030] |In those days it could probably be said about the oft-named Nightmare File System that "it isn't how well the bear dances that amazes, but that it dances at all". [131018610040] |NFS2 in the 1991 epoch was slowly crawling out of Sun into other platforms and was relatively crude. [131018610050] |The security model was essentially non-existent (root on a client machine could read the full contents of an NFS mount) and locking - via nfs.lockd - was this side of experimental. [131018610060] |You would have been foolish to expect flock semantics to work properly, if at all, between two different allegedly interoperable implementations. [131018610070] |Coax was the dominant Ethernet PHY at the time, which many network users have never had the displeasure of using (what do you mean you forgot to put the 50 ohm termination resistor on?), if that gives you a better grip on the state of intranets then.
[131018610080] |Larry Wall and crew had every reason to make pessimistic assumptions about the correctness of NFS locks at the time, and this is the sort of defensive programming that future code jockeys are loath to remove, because it is so hard to prove the absence of a defect when removing old code that guards interoperability with a legacy system you have never even heard of. [131018610090] |Since then, NFS has improved considerably, and lockd has migrated in time to a feature of the Linux 2.6 kernel. [131018610100] |For a collection of 2003+ systems, NFS file locking can probably be trusted, especially if tested well within your application across the many platforms it may be running on. [131018610110] |All of the above was cribbed from memory, and could likely be substantiated through research (e.g. http://nfs.sourceforge.net/), but the proof - as they say - is in the locking, and if you didn't test it, it is presumed broken. [131018620010] |Lennart Poettering recently did some digging into Linux filesystem locking behaviour, which doesn't paint a particularly rosy picture for locking over NFS (especially the follow-up he links to at the bottom of the post). [131018620020] |http://0pointer.de/blog/projects/locking.html [131018630010] |Symbolic link and hard link questions [131018630020] |Let's say /A/B/c.sh is symbolically linked to /X/Y/c.sh. [131018630030] |
  • If c.sh has the command "./SOMETHING", does '.' mean /A/B/ or /X/Y/?
  • [131018630040] |How about with a hard link?
[131018640010] |. is actually the current working directory in either case; it has nothing to do with the directory holding the script: [131018650010] |I agree with Michael, but one place where it may matter is the $0 parameter. [131018650020] |I've seen scripts that investigate the name of $0 and do different things based upon what symbolic name is used. [131018660010] |The . in this case means the current working directory; the links' paths are irrelevant. [131018660020] |Referencing the file for execution or editing is essentially the same regardless of the type of link, even though there are several differences between them. [131018670010] |Although this isn't what you asked, it may be what you're looking for ... [131018670020] |You can use "$0" as a way to locate sub-scripts that are located in the same directory as the main script. [131018670030] |Since the main script is symlinked you need to dereference $0 first, using realpath. [131018670040] |Although dirname returns . if its arg has no directory part, in this example realpath will already have turned the arg into an absolute path. [131018680010] |resource for understanding the kernel and drivers [131018680020] |Possible Duplicate: Linux Kernel: Good beginners' tutorial [131018680030] |What resources are available if I want to develop, change and understand device drivers? [131018690010] |OS X: how to keep the computer from sleeping during an ssh session [131018690020] |When I ssh into an OS X computer on my network, the session lasts until OS X goes into sleep mode. [131018690030] |Is there a way to prevent this from happening during my SSH session? (Apart from physically bumping the mouse or typing keys, or manually disabling the sleep function.) [131018690040] |EDIT: The ssh session would normally be a simple sshfs mount.
[131018700010] |This is not an out-of-the-box solution but it will possibly work if no one else comes up with a solution :-) [131018700020] |You can manipulate the power management settings with the command pmset. [131018700030] |See the manpage for more information about it. [131018700040] |The interesting setting we want to manipulate is sleep: [131018700050] |sleep - system sleep timer (value in minutes, or 0 to disable) [131018700060] |So we can use the following commands: [131018700070] |Now we have to trigger these commands after a login and logout. [131018700080] |If I remember this right, Bash is the default shell for Mac OS X, which brings us to these two files: [131018700090] |Edit or create them in your home directory and add the appropriate commands. [131018700100] |If you want, save the current sleep value in a temporary file and restore it from there afterwards. [131018700110] |The last problem to solve is the password prompt of sudo. [131018700120] |To give your user the permission to invoke pmset without any password, edit your /etc/sudoers with sudoedit. [131018700130] |You need to use the NOPASSWD tag. [131018700140] |If this is new for you, have a look at the sudoers manual. [131018710010] |A web search for mac prevent sleep comes up with a lot of good information on utilities and tricks you can use. [131018710020] |However, in your situation, I suspect that what you really want to do is to have a separate ssh session that runs a very simple command-line program that prevents the Mac from sleeping. [131018710030] |Apple gives the complete source code in Technical Q&A 1160: Preventing sleep. [131018710040] |(Unfortunately, this page is broken on Apple's site at present, but you can pull up a copy from Google's cache.) [131018710050] |I haven't tested to see if this program can be run successfully from an ssh session, but it looks like it could. 
[131018710060] |Note that sshfs implementations typically don't stay logged into the remote system all the time, and just run a session when they want a file or directory. [131018710070] |So you need a separate ssh session, running in a minimized terminal window, to run the insomnia program. [131018710080] |The advantage of this approach is that you don't need to use sudo, or mess with the system power management settings. [131018720010] |Why does the substitution of newlines using g/re/p only apply to every other line in Vim? [131018720020] |Consider: [131018720030] |Then: [131018720040] |unexpectedly returns: [131018720050] |when I want newlines after every bar. [131018720060] |Yet: [131018720070] |returns: [131018720080] |as expected, and if there are newlines between all the bars I get what I expect--an extra newline after every line containing bar. [131018720090] |I solved the problem via: [131018720100] |but I'd really like to have a better understanding of what's going on. [131018730010] |To understand what's going on at this level of detail, I think you have to reach for the source code. [131018730020] |So, citing vim-7.1.314/src/ex_cmds.c (comment at the head of the definition of ex_global()): [131018730030] |This is implemented in two passes: first we scan the file for the pattern and set a mark for each line that (not) matches. secondly we execute the command for each line that has a mark. [131018730040] |This is required because after deleting lines we do not know where to search for the next match. [131018730050] |Thus, if you delete a line in the :g'ed command, the mark for this line goes away too. [131018730060] |If you create a line in the :g'ed command, it won't be marked, so the command won't execute on it. [131018730070] |In your example, line 3 is joined with line 2, so the mark on line 3 disappears, and the next mark after line 2 is the one originally put on line 4. 
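Outside of Vim, the intended end result (an extra blank line after every line containing bar) can be sketched without any mark bookkeeping at all; here is a small sed illustration (the sample file is made up):

```shell
# Build a three-line sample file, then print a blank line after every
# line matching /bar/: `G` appends the (empty) hold space, which emits
# exactly one extra newline per matching line.
printf 'foo bar\nfoo bar\nbaz\n' > /tmp/demo.txt
sed '/bar/G' /tmp/demo.txt
```

Because sed streams the file and never renumbers lines behind itself, it sidesteps the two-pass marking problem described above.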
[131018730080] |Also, ex_global sets the global_busy variable, which causes a few commands, including :s, to behave slightly differently. [131018730090] |I don't think the difference is relevant here though. [131018740010] |Gilles has answered your “why?”, but I thought you might be interested in a simpler way to do what you wanted. [131018740020] |I want newlines after every bar. [131018740030] |Instead of replacing the line terminating character (thus joining the lines and losing the internal mark, per Gilles’ answer), add a blank line right before the end of each matching line. [131018740040] |Use $ to get a zero-width match right before the end of the line and insert a line break there (also no need for the /g modifier since the pattern will only match once per line). [131018750010] |Has anyone got any performance numbers comparing IIS and .NET to Cherokee and Mono? [131018750020] |I am setting up a development server and want to set it up to serve ASP.NET pages using Mono. [131018750030] |I am planning on using Cherokee and Mono (http://www.cherokee-project.com/doc/cookbook_mono.html) and wondered if anyone had done any performance testing comparing the Unix based stack to the Windows based. [131018760010] |This is kinda a non-answer. [131018760020] |However there isn't a real answer here. [131018760030] |Unfortunately this thing is highly application-dependent. [131018760040] |Your app could hit on something that Mono happens to do really well or you might be heavily using something that is implemented poorly or has some bugs. [131018760050] |It's not really a case of Mono being X times slower/faster than IIS. [131018760060] |My suggestion is to take your app, deploy it on two different EC2 instances (one Windows and one Mono) and do your testing there. [131018760070] |If you find major issues on the Mono instance, please report them and we'll try to get things improved. 
[131018760080] |All that being said, I can tell you from personal experience that Mono aspx does perform really well. [131018770010] |When testing Mono/Linux vs .NET/Windows workloads, you have to remember that there is more at play than just the runtime environment. [131018770020] |There are areas in which Linux performs better than Windows (most I/O and network operations tend to be faster for comparable C programs). [131018770030] |At the same time, .NET has a more advanced garbage collector and a more advanced JIT compiler. [131018770040] |When it comes to the class libraries, it really depends on what code paths you are using. [131018770050] |As JacksonH said in a previous post, you can hit code paths that have been optimized in one implementation but not in the other, and vice versa. [131018770060] |On ASP.NET workloads you have to remember that the default setup will route all incoming requests to a single "worker" process; mod_mono and Cherokee use a similar approach: [131018770070] |At least with Apache we support a mechanism where you can divide application workloads across multiple workers, which helps under high loads as it avoids any in-process locking and gives each worker a whole thread pool to work from: [131018770080] |The details on how to configure this setup are available here: [131018770090] |http://mono-project.com/Mod_mono [131018780010] |Is there a Linux utilities repository online accessible from a web browser? [131018780020] |Is there some place which has a collection of all the latest Linux utilities (something like filehippo.com for Windows utilities)? [131018780030] |I know I can use various download utilities similar to yum, each of which would have their own repositories. [131018780040] |I am wondering if there is any repository for Linux maintained anywhere that lets me download these utilities right from the browser? [131018790010] |It would vary by distro. [131018790020] |For example: 
  • Ubuntu has Ubuntu Packages
  • [131018790040] |
  • Gentoo has Gentoo Packages
  • [131018790050] |
  • Arch has Arch Package Database and AUR
  • [131018790060] |
  • Fedora has Fedora Package Database
  • [131018800010] |All package management systems such as apt, yum, etc. download the packages from the Internet (usually from the web, sometimes via FTP). [131018800020] |You can find out where your system's package manager looks for its downloads and go there. [131018800030] |Many distributions have a web interface where you can find information about packages (search, browse changelogs, see bug reports, etc). [131018800040] |For example http://packages.debian.org/ for Debian, http://www.freebsd.org/ports/index.html for FreeBSD, etc. [131018800050] |Note that since there are many variants of unix, a single binary won't work on all of them. [131018800060] |So usually you need to find a binary that's compiled for your distribution and architecture. [131018800070] |Most places that collect such binaries are specific to one distribution. [131018800080] |There are a few distribution-independent places with collections of free software for unix, but they tend to have only sources. [131018800090] |The biggest one I'm aware of is Freshmeat. [131018810010] |Many software projects have their hosting and packages for various distributions at Sourceforge or Google Code. [131018810020] |Also Freshmeat could be considered as a directory for free software. [131018810030] |The most of them let you download packages for your distribution or checkout/clone their repository. [131018810040] |But I see no reason why somebody should maintain such a site. [131018810050] |There is no advantage in using your browser for download some utilities if you want to install them in your system afterwards. [131018810060] |This is exactly the job of the packet management of your distribution and most of them do a good job at this :) In most cases there exist various wrapper tools for package management in all flavors. [131018810070] |For example in Arch there is the basic package management tool called pacman. 
[131018810080] |If I want to install utility foo, I simply have to type pacman -S foo and everything is installed. [131018810090] |If there is no package provided, I can search in the AUR (Arch User Repository). [131018810100] |I can use yaourt, which is a wrapper for pacman, or simply browse the AUR homepage. [131018810110] |On Debian you use apt-get or aptitude, etc. [131018810120] |The FreeBSD packages are also accessible via the system tools or via the Ports homepage. [131018810130] |So why should I want to browse any other homepage to download any utilities? [131018810140] |If my package repository doesn't have it, it's most of the time a small and unpopular project which I most likely won't find at any other "repository" on the net. [131018810150] |Addition after a short discussion with Warren Young (see comments): My point is not that every tool/software/utility has to exist as a package, nor that it is not "useful" otherwise. [131018810160] |The point is that if you are missing a tool, you know what you want and can get it directly from the project page. [131018810170] |Most project pages are hosted on the named services (SF, GC, FM), which provide you with a basic directory, so in combination with your package management and the project pages there should be no need for such a repository. [131018820010] |Custom autocomplete [131018820020] |Is there a way to have bash know exactly what to display when you double tab? [131018820030] |For example, I have a python script scpy which requires a couple of arguments. [131018820040] |For example, apt-get on a double tab gives you [131018820050] |Is there a way to do that for your own scripts/programs? [131018820060] |Do I need to wrap my python script in a bash script? [131018830010] |The easiest way of doing this is to include a shell script in /etc/bash_completion.d/. 
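As a sketch, a minimal completion file for the scpy script from the question could look like this (the subcommand names are invented for illustration):

```shell
# Hypothetical completion for `scpy`: offer a fixed set of subcommands
# for the word currently being completed.
_scpy() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "install remove upgrade search" -- "$cur") )
}
complete -F _scpy scpy
```

With this sourced, typing scpy in and pressing tab would complete to scpy install.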
[131018830020] |The basic structure of this file is a simple function that performs the completion, followed by an invocation of complete, which is a bash builtin. [131018830030] |Rather than go into detail on how to use complete, I suggest you read An Introduction to Bash Completion. [131018830040] |Part 1 covers the basics and Part 2 gets into how you would go about writing a completion script. [131018830050] |A denser description of bash completion can be found in the "Programmable Completion" section of man bash (you can type "/Programmable Completion" and then press 'n' a few times to get there quickly. [131018830060] |Or, if you are feeling lucky, "g 2140 RETURN"). [131018840010] |Trying to use an old PC as a home theater PC, need advice! [131018840020] |Hi all, I'm trying to use some fairly old hardware (purchased ~2000) to hook up to my TV full time and I'm looking for advice on which distro to use. [131018840030] |I've listed my hardware specs and my requirements. [131018840040] |Hardware Specs: [131018840050] |
  • Pentium 4 1.9 GHz
  • [131018840060] |
  • 765 MB RAM
  • [131018840070] |
  • NV11 [GeForce2 MX/MX 400] Graphics Card
  • [131018840080] |
  • more available if needed, not sure what else to list though
  • [131018840090] |Machine Requirements: [131018840100] |
  • Capable of streaming music/video (a functional browser)
  • [131018840110] |
  • Easy install
  • [131018840120] |
  • Able to play out to a large screen (~30 inches)
  • [131018840130] |
  • Capable of playing DVDs/music from the hard drive.
  • [131018840140] |
  • Extras that would be nice: ssh support, able to run as a LAMP, able to run as a File Server.
  • [131018840150] |Based on my machine requirements and hardware specs, is this possible? [131018840160] |Which distro would you recommend? [131018840170] |I would ideally use Ubuntu Server+GUI but I've been having trouble with the install and I think it may be due to the aging hardware. [131018840180] |-- [131018850010] |Rather than start from a standard Linux distro and add what you need on top, this sort of task calls for a specialized distro. [131018850020] |I've heard good things about LinuxMCE. [131018850030] |It's a customized version of Kubuntu with many media center and home automation add-on packages, all nicely integrated into a coherent whole. [131018860010] |The general-purpose distros have had a lot of recent improvements that have started to cut out the older hardware. [131018860020] |Linux may still technically support 486 CPUs, but you're not going to get anything like graphics support out of it, which is what most users are looking for these days. [131018860030] |Hopefully your TV can take a VGA input, otherwise you'll be stuck with SD. [131018860040] |That P4 is still more performant than the previous generation of Atom processors, so that should give you some benchmark to base expectations on. [131018860050] |Flash on Linux is still pretty CPU-hungry, so you're probably not going to be able to watch more than 480p video even if you can crank your screen size higher than that. [131018860060] |DVD playback can be done in software, but may be complex enough to push that box to its limit. [131018870010] |My to-be HTPC has similar specs, although I don't have the GeForce card. [131018870020] |Until now I've been using my netbook as an ad-hoc HTPC, with lower specs, and good results. [131018870030] |So given your specs and requirements, yes, it's more than possible. [131018870040] |Dig around the GeeXboX site and see if you like it :-) [131018880010] |Is there a standard symbolic link to the current user's home directory? 
[131018880020] |The shell can expand ~ to your home directory. $HOME usually has the same deal, but often you want to refer to the current user's home directory from a context that may not support such expansion. [131018880030] |I have had config files where $HOME works but ~ doesn't and vice versa. [131018880040] |I would guess that fuse could provide something along these lines, something like /var/myself -> $HOME [131018880050] |With that I could place values in config files to point to things like /var/myself/backdrops/pornography/wtf/yarly.jpg [131018880060] |Is there something like this already? [131018880070] |If not, are there good reasons for there not being something like this? [131018890010] |You can get the PID of the requesting user and ask the system for their home directory. [131018890020] |So it is possible. [131018890030] |However, I'm not sure there are no SUID programs that assume the filesystem layout is static. [131018890040] |Edit: [131018890050] |The above code is not ideal but should work for common cases. [131018900010] |I understand your concern, but the answer is "no", there is no such thing. [131018900020] |The usual approach is to ask the OS for the user's home path, or get the $HOME variable. [131018900030] |All these options always need some coding from the application. [131018900040] |A lot of apps, like bash, offer the "alias" ~ (open(2) does not translate that). [131018900050] |Of course a VFS or a FUSE module could be implemented to do this. [131018900060] |Probably there is something that does this; I am going to ask! [131018900070] |But is it really needed? [131018900080] |You can use a workaround like: 
  • Create a script to start the program that links $HOME to a relative path or a known location.
  • [131018900100] |
  • Use pam_exec to link the $HOME dir to a known location http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/sag-pam_exec.html
  • [131018910010] |One trick (on Linux, at least) would be to change directory to $HOME before running the app and then use /proc/self/cwd/... in the config file. [131018920010] |Most programs allow you to specify a path to the configuration file on the command line. [131018920020] |So you could write a wrapper that takes a standard configuration file, filters it to substitute things like $HOME for the current user, and then passes the modified, temporary configuration file to the program. [131018930010] |What's the difference between a,b=b,a+b and a=b b=a+b? [131018930020] |I'm trying to understand the difference between [131018930030] |and [131018930040] |I found Equation 1 in the Python tutorial as an example of a program to print out Fibonacci numbers. [131018930050] |Equation 1 produces the correct sequence [1, 2, 3, 5, 8, etc.] while Equation 2 produces the wrong sequence [1 2 4 8 16]. [131018930060] |I'm assuming that Equation 2 is a shorthand for Equation 1, by analogy with the form "a,b=0,1" (a=0 b=1). [131018940010] |This would probably be better on Stack Overflow, but since I can't close things, here is something to consider. [131018940020] |A good way to figure out how these are different is to step through the process and see what is happening. [131018940030] |Starting with our initial conditions, we can do the first step with Equation 1: [131018940040] |Now consider doing the same with Equation 2: [131018940050] |The equations on the right hand of the assignment are evaluated before any assignment takes place. [131018940060] |Thus, when you do a, b = b, a+b, the a+b is evaluated before b is assigned to a. [131018940070] |So in Equation 1 you are getting b = 0 + 1, while in Equation 2, b is assigned to a first, yielding b = 1 + 1. [131018950010] |Is there a Linux VFS tool that allows binding a directory in a different location (like mount --bind) in user space? 
[131018950020] |For a user process, I want to mount a directory in another location, in user space, without root privileges. [131018950030] |Something like mount --bind /origin /dest, but with a VFS wrapper. [131018950040] |Like a usermode fine-tuned chroot. [131018950050] |The program would wrap the syscalls to files to "substitute" the paths needed. [131018950060] |It could be called with a command line like: [131018950070] |bindvfs /fake-home:/home ls /home [131018950080] |I am sure that this already exists! :) [131018960010] |VFS already allows for non-root mounting of filesystems. [131018960020] |You can add the user or users option to the fstab entry and make sure vfs.usermount=1 is in /etc/sysctl. [131018960030] |None of this will give you chroot-like controls however. [131018960040] |The bind option isn't going to change permissions or allow for an 'alternate' access; this is a second mtab entry for the same exact filesystem and contents. [131018960050] |Modifications in the bind mount affect the original. [131018960060] |I'd make sure you clarify your end goal before moving further. [131018970010] |The parrot program can do what you ask for: intercept system calls and redirect certain paths (e.g., all paths starting with /anonftp are remapped to transparently access remote files over FTP). [131018970020] |It also runs entirely in userspace. [131018970030] |However, despite an impressive array of common and uncommon network protocols, parrot does not have any module to do simple filesystem->filesystem rewriting like you ask for. [131018970040] |That should be quite simple to add, if you know some C programming. [131018970050] |Alternatively, bindfs (which runs on top of FUSE) works like a mount --bind in userspace. [131018970060] |(But this goes in the reverse direction relative to re-directing /home to /fake-home as you mention in your question.) [131018980010] |what does libgcc_s.so contain? 
[131018980020] |This might sound like a really dumb question but I'm really in the dark here. [131018980030] |I tried running objdump on the lib to figure it out, but with no success. [131018980040] |Is there a way (a command) to find out what a library does (what it is good for)? [131018990010] |It's GCC's runtime library, which contains some low-level functions that GCC emits calls to (like long long division on 32-bit CPUs). [131019000010] |Why is vimdiff so slow? [131019000020] |Whenever I'm in vimdiff mode, my PuTTY session slows down considerably. [131019000030] |This is especially noticeable when I try to scroll down many lines by keeping j pressed. [131019000040] |
  • Why is it so slow in comparison to regular Vim running under vsplit? [131019000050] |I realize it matches across lines and color-highlights the differences, but does that account for all the speed differences?
  • [131019000060] |
  • What can I do about it? [131019000070] |I would like to keep line matches across files (and color highlighting as well). [131019000080] |What settings can I change to help speed up the performance of my vimdiff utility?
  • [131019010010] |Do you have SSH compression enabled? [131019010020] |This kind of scrolling and highlighting generates a whole lot of redundant terminal traffic; compression makes a huge difference over a remote link. [131019020010] |Specifying which git repo to use [131019020020] |I have a bash script, called cgit, that acts as git for one specific git repo (located at /.git): [131019020030] |I use it to keep track of important system files. [131019020040] |The trouble is when I try to add content relative to the directory I'm in, for example: [131019020050] |cgit would try and add /.ssh/config to the /.git repo. [131019020060] |Any suggestions for a workaround? [131019030010] |I'd update your cgit script to use --git-dir. [131019030020] |From the man pages: [131019030030] |So it would become: [131019030040] |FTR, I haven't tried it. [131019040010] |Much better to use etckeeper instead of your own home-grown solution. [131019040020] |There are a number of gotchas that etckeeper addresses nicely. [131019040030] |It's a package in several Linux distros, and is also easy to build from source. [131019040040] |A really nice feature on Debian-based systems is that it ties in with apt to automatically commit before and after installing or updating packages. [131019040050] |This helps distinguish changes you made yourself from ones made by package updates. [131019040060] |It also (optionally) does a daily autocommit. [131019050010] |How to remove a file from the git index [131019050020] |How do I remove a file from a git repository's index without removing the file from the working tree? [131019050030] |If I had a file ./notes.txt that was being tracked by git, I could run git rm notes.txt. [131019050040] |But that would remove the file. [131019050050] |I'd rather have git just stop tracking the file. [131019060010] |You could just use git rm --cached notes.txt. [131019060020] |This will keep the file but remove it from the index. 
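A quick demonstration in a throwaway repository (the file name and commit identity here are made up):

```shell
# `git rm --cached` removes a file from the index but leaves it in the
# working tree, so git simply stops tracking it.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "remember the milk" > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "track notes"
git rm --cached --quiet notes.txt   # stop tracking, keep the file
test -f notes.txt                   # the file is still on disk
git ls-files                        # notes.txt is no longer listed
```

If the file should stay untracked afterwards, also add it to .gitignore, otherwise git status will keep reporting it as untracked.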
[131019070010] |Keeping track of symbolic links? [131019070020] |I use symbolic links quite often, but after moving the original file, I lose track of the symbolic link. [131019070030] |I also use symbolic links for keeping track of some files in the same directory, but again, I lose track. [131019070040] |
  • Is there any way (tool/method) to keep track of the symbolic link no matter what change I make? [131019070050] |Is the hard link the only way for this?
  • [131019070060] |
  • Is there any way to make the symbolic link relative, so that when I move the directory that contains both the original and the link, the link still works?
  • [131019080010] |Concerning your second question, if you make the symlink using a relative path and then move the whole directory structure, it should still work. [131019080020] |Consider the following terminal session: [131019080030] |As for your first question, if you really want a link that will refer to the same file no matter what you do with the original location of the file, a hard link is probably what you want. [131019080040] |A hard link is basically just another name referring to the same inode. [131019080050] |Thus, there is no difference between the hard link and the "original file." [131019080060] |However, if you need to link across file systems, hard links often do not work and you usually cannot make hard links to directories. [131019080070] |Further, you will notice some differences when performing some file operations. [131019080080] |Most notably, removing the original will not remove the file. [131019080090] |The hard link will still point to the file and be accessible. [131019090010] |You can use the readlink command to discover/check what a (now broken) link originally pointed to and then fix it up to point to whatever the file's new name/location is. [131019090020] |Or you could do your file renaming only through a shell script that renames a file and searches for and fixes any links that point to it. [131019090030] |Either way, you would do something like this: [131019090040] |However, if you're doing this all the time then symlinks are probably not what you should be using. [131019100010] |Tool in UNIX to subtract dates [131019100020] |Is there any tool in Solaris UNIX (so no GNU tool available) to subtract dates? [131019100030] |I know that in Linux we have gawk that can subtract one date from another. [131019100040] |But in Solaris the maximum we have is nawk (improved awk) which cannot perform date calculations. [131019100050] |Also I cannot use perl... 
:( [131019100060] |Is there any way to do date calculations like 20100909 - 20001010? [131019100070] |UPDATE: Is bc able to perform date calculations? [131019100080] |Thanks guys [131019110010] |I would try using the date command, which is part of POSIX, so it is just about everywhere. [131019110020] |UPDATE: Unfortunately, it seems that -d is not part of POSIX date and likely not there on Solaris. [131019110030] |Thus this likely won't answer the OP's question. [131019110040] |Now d1 and d2 are integers that correspond to seconds since the Unix epoch. [131019110050] |Thus to get the difference between the two, we subtract ($((d1-d2)) in bash) and convert to whatever units we want. [131019110060] |Days are the easiest: [131019110070] |How to do the conversion will likely be different if you don't have bash. [131019110080] |The most portable way may be to use expr (expr's posix man page). [131019120010] |Install some tools! [131019120020] |I mean, it is Solaris, what do you expect? ;) [131019130010] |Unfortunately, none of the POSIX command line utilities provide arithmetic on dates. date -d and date +%s are the way to go if you have them, but they're GNU extensions. [131019130020] |There's a clumsy hack with touch that sort of works for checking that a date is at least n days in the past: [131019130030] |(Note that this code may be off by one if DST started or stopped in the interval and the script runs before 1am.) [131019130040] |Several people have ended up implementing date manipulation libraries in Bourne or POSIX shell. [131019130050] |There are a few examples and links in the comp.unix.shell FAQ. [131019130060] |Installing GNU tools may be the way of least pain. [131019140010] |If you have Python: [131019140020] |You tagged your question "gawk" but you say "no GNU tools". [131019140030] |Gawk has date arithmetic. 
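For comparison, this is what the GNU date approach mentioned above looks like on a system that has it (again, GNU date is not on stock Solaris, so treat it as the Linux-side sketch); using UTC keeps DST from skewing the day count:

```shell
# Days between the two YYYYMMDD dates from the question, via GNU date.
d1=$(date -u -d 20100909 +%s)    # seconds since the epoch
d2=$(date -u -d 20001010 +%s)
echo $(( (d1 - d2) / 86400 ))    # prints: 3621
```

On Solaris itself you would fall back to the awk, Python, or expr approaches discussed in the answers.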
[131019150010] |Perl is likely to be installed, and it's easy enough to grab modules from CPAN, install them in your home directory, and refer to them in your program. [131019150020] |In this case, the Date::Calc module has a Delta_Days subroutine that will help. [131019160010] |Here is an awk script I just wrote up; it should work with any POSIX awk. [131019160020] |You'll have to try the Solaris version; remember that there are two versions of Awk on Solaris as well, one in /bin and one in /usr/xpg4/bin/awk (which is nawk, I believe). [131019160030] |Pass a YYYYmmdd date string through and it will be converted to the number of seconds since the Epoch (with a bit of give for being on day boundaries). [131019160040] |Then you will be able to subtract the two. [131019170010] |python: [131019180010] |How can I customize new mail notification in alpine? [131019180020] |I use alpine as my primary mail reader. [131019180030] |While I spend most of my day in the terminal or emacs, it would still be nice to get pretty notifications of new mail using notify-bin. [131019180040] |Is there any way I can configure alpine to run a custom command when new mail is received? [131019190010] |Can't you use a specialized mail-notification tool like Gnubiff, mail-notification or kbiff? [131019200010] |It is not possible to customize the "new mail notification" of alpine. [131019200020] |There is no such option mentioned in the configuration documentation. [131019200030] |Also, here is a quote from the mailing list from Eduardo Chappa: [131019200040] |I've noticed that alpine gives a visual alert in gnome terminal by flashing the screen, when a new mail arrives. [131019200050] |Is there any way to customise the alert, so that, for instance, it plays a sound or something? [131019200060] |[..] [131019200070] |Alpine, as you can guess now, will only beep. [131019200080] |In Web Alpine it is possible to send a file to be played (to the browser) for new mail notification. 
[131019200090] |There is no such feature in Unix, Mac or Windows Alpine. [131019200100] |Your options are now: [131019200110] |
  • write a feature request to the alpine-info mailing list
  • [131019200120] |
  • get the sources and write a patch
  • [131019200130] |
  • use an external tool like Mail Notification
  • [131019200140] |And to quote the developer of my favorite mail client: [131019200150] |All mailclients suck... [131019200160] |:-) [131019210010] |There is a "NewMail FIFO Path" configuration option in alpine. [131019210020] |Quoting the help: [131019210030] |You may have Alpine create a FIFO special file (also called a named pipe) where it will send a one-line message each time a new message is received in the current folder, the INBOX, or any open Stayopen Folders. [131019210040] |To protect against two different Alpines both writing to the same FIFO, Alpine will only create the FIFO and write to it if it doesn't already exist. [131019210050] |So, I set the option to '/tmp/alpine.fifo', and wrote a simple utility to read messages from the FIFO and invoke 'notify-send': [131019210060] |Save it to alpine-notifier.c, and compile it with the 'gcc alpine-notifier.c -o alpine-notifier' command. [131019210070] |Start 'alpine-notifier' after alpine is started. [131019210080] |Enjoy pop-up notifications. [131019220010] |Is it possible to manage windows in different ways on different workspaces/activities in KDE 4.5? [131019220020] |I'm just wondering if it's possible to use tiling on one workspace (or maybe activity), use normal management on another, and maybe change some grouping settings on yet another. [131019230010] |As of KDE SC 4.5 this is not possible. [131019240010] |Why do some commands have man pages and other commands use --help? [131019240020] |Wouldn't it be more consistent if one of these always worked? [131019250010] |Yes, it would be more consistent. [131019250020] |This has nothing to do with cross-platform concerns and everything to do with developers not writing (or not wanting to write) documentation. A man page is documentation; --help is mostly programming in nature. [131019250030] |I've also seen the case where man pages didn't exist because the developer didn't know how to make one, or how to convert the documentation from another format; sometimes this is easily remedied. 
[131019250040] |I would like to note that I wish both always worked. [131019260010] |Some people do not like man pages, and write info files instead (notably GNU). [131019270010] |I agree with xenoterracide - both would be nice... [131019270020] |I expect --help to give a brief report of functionality and options. [131019270030] |I expect man (or info...) to describe in detail what was previously reported by --help, and maybe provide some examples, background, etc. [131019280010] |Missing man pages for some commands [131019280020] |Typing man alias gives me [131019280030] |No manual entry for alias [131019280040] |The same thing goes for export and eval. [131019280050] |At first I thought it only happens to shell built-in commands, but man echo gives me the man page. [131019280060] |Apart from googling, is there a way that I can view the documentation of those commands? [131019280070] |If not, is there a way to "install" those missing man pages? [131019290010] |You probably have the man page for echo because most systems have an echo binary in /bin, even though most shells provide a built-in anyway; you're seeing the man page for that binary. [131019290020] |The man pages for all the other commands you're missing are in the POSIX Programmer's Manual (man section 1P). [131019290030] |How to install it will depend on your distro; on Gentoo they're in the sys-apps/man-pages-posix package. [131019300010] |Man pages for built-in commands are usually available in the related shell's man page. [131019300020] |Try man bash. [131019310010] |alias, export, and eval are all part of man builtin on Mac OS X and, I assume, on other BSD systems. [131019310020] |On OS X, the man pages for the builtin commands are all aliased to builtin, so if I type man alias it will pull up man builtin. [131019310030] |The problem, though, is that man builtin doesn't really provide information on the individual commands. [131019310040] |Therefore, to get info on alias, you have to use help alias.
[131019310050] |While I prefer reading man pages from a terminal prompt, if they are missing from a system, I'll go to http://man.cx/ as it's pretty comprehensive. [131019320010] |Builtin commands can easily be found by checking the man page of your current shell: [131019320020] |In the man page of bash, you'll find: [131019320030] |When in doubt, run which alias; when it reports builtin, or the command can't be found in $PATH, there's a good chance it's a builtin, so check the appropriate man pages. [131019330010] |You can get information about bash built-in commands with help, for example help alias or help export. [131019340010] |Unix - Filter Commands [131019340020] |Hello all, [131019340030] |I want to know about the "filter commands" which are available in Unix. [131019340040] |I am confused regarding this: [131019340050] |
  • What is the purpose of a "filter command"?
  • [131019340060] |
  • Which filter commands are available in Unix?
  • [131019340070] |I have read some books/articles on the web; some books list a few filter commands, and others list different ones. [131019350010] |I'm not sure what you're asking without context. [131019350020] |"Traditional" Unix tools read from standard input and write to standard output, so you can chain them together using a pipe, the | operator: [131019350030] |Things which read from standard input and write to standard output are filters. [131019350040] |There is an article here: http://en.wikipedia.org/wiki/Filter_(Unix) [131019360010] |A filter command is almost any command line program on UNIX, really. [131019360020] |Every program that can read from STDIN and write to STDOUT can be used as a filter. [131019360030] |There are exceptions, though. [131019360040] |One such exception is cpio, which takes a list of files from STDIN to create an archive on output. [131019360050] |There are some commands that seem unable to read from STDIN, though you should check whether those commands accept - as a file parameter to read from STDIN or write to STDOUT, like cat: [131019360060] |Output f's contents, then standard input, then g's contents. [131019360070] |But even when your program does not support that, you can still usually force a program to act as a filter: [131019360080] |For instance, take wget and suppose you want to make that program write to STDOUT: [131019360090] |i.e.: You can use /dev/stdin, /dev/stdout, and /dev/stderr as files to force a program to read from, or write to, the standard I/O descriptors. [131019360100] |Another side note: your pipeline can be as long as you wish, so you can pipe from one program to another, building a long chain of filters: [131019370010] |Tricks and tips for finding information in man pages [131019370020] |Does anyone have any tricks and tips for finding information in man pages? [131019380010] |man -k search [131019380020] |This will give you a list of all man pages which relate to 'search'.
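As a sketch of narrowing that list by section (assuming a built man database; filter_section is a hypothetical helper for illustration, not a standard command), you can filter man -k output on the section number shown in parentheses:

```shell
# filter_section N: keep only the lines for manual section N from
# "name (N) - description" output such as man -k (apropos) produces.
filter_section() {
    grep "($1)"
}

# Real usage, guarded so it is a no-op where man or its database is missing:
command -v man >/dev/null && man -k chmod 2>/dev/null | filter_section 1 || true
```

On most systems apropos is equivalent to man -k, so the same filter applies to its output.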
[131019390010] |Following Kristof's answer, if you type, e.g., man -k chmod, you'll get a list of possibilities. [131019390020] |Note the number in the parentheses; it indicates the section to look for in the manual pages: [131019390030] |On UNIX you can try: [131019390040] |man -s1 chmod will show the man page for the chmod command [131019390050] |man -s2 chmod will show the man page for the C library function chmod() [131019390060] |On Linux you should change -s to -S [131019400010] |Type slash / and then type the string to search for. [131019400020] |Then keep pressing n to get to the next match. [131019410010] |Pay attention to the section number: suppose you want help on printf. There are at least two of them: one in the shell and one in C. [131019410020] |The bash version of printf is in section 1, the C version is in section 3 or 3C. [131019410030] |If you don't know which one you want, type man -a printf, and all manual pages will be displayed. [131019410040] |If what you are looking for is the format of printf with all the % codes and it doesn't appear on the printf man page, you can jump to related man pages listed under the SEE ALSO paragraph. [131019410050] |You may find something like formats(5), which suggests typing man 5 formats. [131019410060] |If you are annoyed that man printf gives you printf(1) and all you want is printf(3), you have to change the order of scanned directories in the MANPATH environment variable and put the ones for the C language before the ones for shell commands. [131019410070] |This may also happen when Fortran or TCL/Tk man pages are listed before the C ones. [131019410080] |If you don't know where to start, type man intro, or man -s <section> intro. [131019410090] |This gives you a summary of the commands of the requested section. [131019410100] |Sections are well defined:
  • 1 is for shell commands,
  • [131019410120] |
  • 2 is for system calls,
  • [131019410130] |
  • 3 is for programming interfaces (sometimes 3C for C, 3F for Fortran...)
  • [131019410140] |
  • 5 is for file formats and other rules such as printf or regex formats.
  • [131019410150] |Last but not least: information delivered in man pages is not redundant, so read carefully from beginning to end to increase your chances of finding what you need. [131019420010] |The apropos utility is seriously handy for finding the appropriate manpage. [131019430010] |Always check out what's in the SEE ALSO section. [131019430020] |Frequently I find other useful commands or functions that way. [131019440010] |Don't ignore the info pages. [131019440020] |Many GNU tools have far more extensive info pages than man pages. [131019440030] |Often, the SEE ALSO section will say "The full documentation for foo is maintained as a Texinfo manual." [131019440040] |This is especially true for anything in the GNU coreutils package. [131019440050] |Also, if you are an emacs user, don't forget you can read info and manual pages without leaving your editor: M-x info and M-x woman. [131019450010] |If you're more comfortable with your editor than you are with the default pager, you can set MANPAGER in your environment. [131019450020] |For example, I have this line in my ~/.bashrc: [131019450030] |export MANPAGER="col -b | vim -c 'set ft=man nomod nolist ignorecase' -" [131019460010] |As @Steven D says, don't forget the info pages. [131019460020] |In addition, don't be intimidated by the info pages. [131019460030] |I know plenty of people who don't use the info pages because of the built-in navigation system. [131019460040] |My favorite solution is to pipe the info pages through less: [131019460050] |This way, I can navigate the info pages using my favorite pager. [131019460060] |The info pages will now behave the same as man pages. [131019470010] |Most of us set the PATH variable. [131019470020] |This will show you how to automatically make the man search path match your command search PATH.
[131019470030] |Say you append your path to include your personal, work-specific, and locally-installed utilities, like export PATH=$PATH:~/bin:/workgroup/bin:/opt/local/bin:. [131019470040] |As a side effect, man foo won't show the manpages stored at ~/man, /workgroup/man, or /opt/local/man. [131019470050] |To resolve this, I use the manpath command to automatically set the man page search path. [131019470060] |For example, my ~/.bashrc has the following. [131019470070] |This works for me on a hundred different systems running everything from FreeBSD 4.x, Darwin and CentOS 5: [131019470080] |Some systems (like Apple Leopard) set the MANPATH automatically, but that means that your system will use the MANPATH variable instead of using manpath. [131019470090] |As a result, man pages for 'MacPorts' (/opt/local/man) are ignored. [131019470100] |I want to control this myself, so I unset MANPATH: [131019480010] |The default pager for reading a man page is less. [131019480020] |There is documentation on less here. [131019480030] |In particular: [131019480040] |
  • Scroll up/down by one page: b/space
  • [131019480050] |
  • Scroll up/down by half a page: u/d
  • [131019480060] |
  • Searching forwards/backwards: / or ?, then type a regular expression, then hit n to go to the next match or shift-n to go to the previous match. [131019480070] |Add an @ before the regular expression to search from the start.
  • [131019490010] |Similar to, but slightly different from, Rob Hoelz's answer: [131019490020] |Add the following to your ~/.vimrc: [131019490030] |Now vimman is an excellent manpage viewer, and :Man from within Vim (or simply hitting K over a keyword) is an excellent manpage browser. [131019500010] |In-place upgrade of a software RAID 5 array [131019500020] |I run a software RAID array for my backups, but my data has outgrown its capacity. [131019500030] |Considering I have a full 2.4TB array with 5*600GB drives and also have 5*2TB drives I would like to swap in: [131019500040] |What would be the nicest way to upgrade the array? [131019500050] |I thought of faulting one drive at a time, swapping in a new drive, and rebuilding, but I am not sure whether at the end of the process I will be able to resize the array. [131019500060] |Thoughts? [131019510010] |On hardware RAID controllers, rebuilding an array with larger disks won't result in a larger array. [131019510020] |In the past, I created new arrays next to the old ones. [131019510030] |My last upgrade plan was: [131019510040] |
  • copy the data on 2 disks (as extra back-up)
  • [131019510050] |
  • Build a new array with the remaining larger disks (RAID 5 will still give you a larger array than the last one)
  • [131019510060] |
  • Move the data to the new array
  • [131019510070] |
  • Remove the old array
  • [131019510080] |
  • Grow the new array with the 2 extra disks
  • [131019520010] |Assuming this is Linux, this is doable and actually pretty easy. [131019520020] |It is covered on the software RAID wiki, but the basic steps are: [131019520030] |
  • Fail and remove drive.
  • [131019520040] |
  • Replace with a larger drive.
  • [131019520050] |
  • Partition the drive so the partitions are the same size as, or larger than, the ones in the existing software RAID array.
  • [131019520060] |
  • Add the partitions to software RAID and wait for it to sync.
  • [131019520070] |
  • Repeat above steps until all drives have been replaced.
  • [131019520080] |
  • mdadm --grow /dev/mdX --size=max to resize the mdadm device.
  • [131019520090] |
  • resize2fs /dev/mdX to resize the file system assuming you have ext3.
  • [131019520100] |You can grow the mdadm device and the file system while the server is live too. [131019520110] |If your drives are hot swappable you can do everything without downtime. [131019530010] |Emacs sync w/ Google Calendar and Contacts?? [131019530020] |Is there a way to use Emacs to sync with Google Calendar and Google Contacts, ideally keeping a local copy so I can access them offline? [131019530030] |Thanks! [131019540010] |Unfortunately, I am unable to give a complete answer. [131019540020] |All I have is advice about some possible paths to wander down. [131019540030] |The easiest route would be if the emacs-g-client that Gilles mentioned in the SU version of this question works. [131019540040] |If that doesn't work, I would look into the following: [131019540050] |
  • At the very least you should be able to get some calendar functionality by accessing your Google calendar using ical. [131019540060] |The function icalendar-import-file can import an ical file into an emacs diary file (icalendar-import-file documentation). [131019540070] |Thus, in your .emacs file you could have a bit of emacs lisp to fetch the Google calendar ical file and import it into your diary. [131019540080] |If you do end up using org-mode, there are a number of ways to integrate org-mode with diary-mode.
  • [131019540090] |
  • I think that the ultimate goal would be to make use of the GData API. [131019540100] |I don't think that there is an easy way to get access to Google contacts outside of this API. [131019540110] |There is a command line utility called Google CL that supports a wide range of functionality using this API, and which could theoretically be used inside some emacs lisp functions to provide full access to your contacts, calendar, and many other Google-hosted services. [131019540120] |This, however, would likely be much more difficult than just a few lines thrown into your .emacs.
  • [131019550010] |I'm not sure, but I think that was discussed and successfully done by one of the folks on the org-mode mailing list (emacs-orgmode@gnu.org, iirc). [131019560010] |For Google Calendar, I have a one-way sync set up successfully. [131019560020] |Emacs fetches my calendars at startup and transfers them into the emacs diary. [131019560030] |This is then displayed by org-mode in the agenda, but you can set it up any way you want. [131019560040] |For sending back to Google Calendar, I have not yet set anything up, as I don't need it that much. [131019560050] |However, I think it would be pretty easy to have a function that adds an entry in the diary and calls googlecl to add an entry in your Google calendar. [131019560060] |To fetch the calendars, I have the following in my .emacs (note that this is not my code; it comes from the org-mode mailing list, but I can't remember where I found it exactly): [131019560070] |Replace "http://www.google.com/calendar/ical/DFSDFSDFSDFASD/basic.ics" with the URLs of the calendars you want to fetch (you find them at the bottom of the setup page of each calendar in Google Calendar). [131019560080] |You can add as many as you wish. [131019560090] |Now, you can just call (getcals) when you want to fetch the calendars. [131019560100] |You can put this in your .emacs to do it at startup, but it might stall your startup. [131019560110] |To have org-mode display the diary entries in the agenda, just add (setq org-agenda-include-diary t) to your .emacs. [131019560120] |See the org-mode manual for details. [131019570010] |What makes Ubuntu not totally Free Software? [131019570020] |I heard that Ubuntu is not completely free (as in Freedom). [131019570030] |What are the specific parts of Ubuntu that are not Free? [131019570040] |Thanks! [131019570050] |P.S. [131019570060] |This is just out of curiosity, not meant to spark a debate on Free Software, etc.
[131019580010] |For one thing, it uses closed-source hardware drivers, which aren't regarded as "free" the GNU way. [131019580020] |That's one of the reasons why some drivers aren't supported on Fedora. [131019580030] |There are different kinds of "free" in the Linux world. [131019580040] |Closed source is what makes a distro not GPLv2 compatible, which clearly requires the inclusion of all the source code. [131019590010] |Assuming you mean "Free as in Freedom" rather than "free as in beer" (see this essay for one description of the difference between the two), a person claiming that Ubuntu is not free may be referring to one of the following issues: [131019590020] |
  • Binary blobs in the Linux kernel (this is often firmware that is needed to let a free driver work).
  • [131019590030] |
  • Non-free hardware drivers.
  • [131019590040] |
  • Non-free software that is in the Ubuntu repositories, such as flash.
  • [131019590050] |
  • The inclusion of Ubuntu One, which is a free client for a non-free web server provided by Canonical.
  • [131019590060] |Sometimes, they may be referring to the inclusion of software that poses legal problems in the US because of patents or other issues; however, such issues are usually orthogonal to the software being free. [131019590070] |However, it is more than possible to have a completely free system using Ubuntu. [131019590080] |The vrms package in the Ubuntu repository is a good first step if you are concerned with non-free packages that are installed on your system. [131019590090] |If you want to go even further, you can consider using Linux Libre a version of the Linux kernel that has non-free binary blobs removed from it. [131019590100] |Note, however, that installing linux libre will break your support for any hardware that needs those non-free bits. [131019590110] |I personally find it "free enough" to ensure that I don't have any non-free packages installed and tend not to worry about binary blobs. [131019590120] |But each person tends to draw "the freedom line" in a different place. [131019600010] |and it can be easily installed without the non-free bits by pressing F6 and selecting "Free software only" before installing. [131019610010] |Is btrfs stable enough for home usage? [131019610020] |btrfs has finally found its way into the latest kernels, is it considered stable and safe enough to use in a home backup scenario (as an alternative to zfs) ? [131019620010] |No, and while fuse-ZFS is the bee's knees (having tried it) I wouldn't use it either. [131019620020] |It's not a stability issue - both are fairly stable - but one of code maturity. [131019630010] |The roadmap for btrfs in Ubuntu is to have it as the default filesystem by 12.04 LTS. [131019630020] |The likely cutover to default will be 11.04, other distributions may have more or less aggressive plans, but watching them is your best cue to the perceived stability and reliability and performance of the code. [131019640010] |What do you mean by "home backup scenario"? 
[131019640020] |If you mean a system that is backed up regularly and you can afford the loss of some work (btrfs is only the fs for a /home without crucial data), I'd say you can try it if you feel very adventurous. [131019640030] |If you mean the fs underlying your backups, you probably need a rock-stable filesystem - like ext3/4 with ultra-conservative options (your mileage may vary as to which filesystem). [131019640040] |Neither zfs-fuse nor btrfs is stable enough. [131019640050] |If you mean that snapshots are your "backup" method - then you have no backup. [131019640060] |You should probably read the gotchas and ask yourself at each point "can I afford this happening?". [131019650010] |From my /etc/fstab: [131019650020] |So, in a way, yes, it is. [131019650030] |I haven't had a single problem with it. [131019650040] |However, I reinstall often (I use Fedora, so I install a new release twice a year) and: [131019650050] |my /home is not on btrfs. :) [131019650060] |Testing new stuff. [131019650070] |It's a dirty job, but somebody has to do it. [131019660010] |Tips for putting ~ under source control [131019660020] |I want to put my home directory (~) under source control (git, in this case), as I have many settings files (.gitconfig, .gitignore, .emacs, etc.) in there I would like to carry across machines, and having them in git would make it nice to retrieve them. [131019660030] |My main machine is my MacBook, and the way that OS X is set up, there are many folders I want to ignore (Documents, Downloads, .ssh). [131019660040] |There are also folders which are already using git (.emacs.d). [131019660050] |My thought was to just add all these directories to my .gitignore file, but that seems kind of tiresome, and could potentially lead to some unforeseen consequences. [131019660060] |My next thought was to periodically copy the files I want to store into some folder in home, then commit that folder.
[131019660070] |The problem with that is that I have to remember to move them before committing. [131019660080] |Is there a clean way to do this? [131019670010] |I think your second hunch, to have an unrelated folder under source control, is good. [131019670020] |Just add 2 shell scripts there. [131019670030] |One to copy the files under your control to ~, and the other to collect the files from ~ and copy them back to the source-controlled folder and commit. [131019680010] |I use the old rcs for that. [131019680020] |Have a look at the manpages for ci, co, and rcs. [131019680030] |These sites should be helpful as well: [131019680040] |
  • http://www.jodypaul.com/SWE/RCSTutorial/RCSTutorial.html
  • [131019680050] |
  • http://docstore.mik.ua/orelly/perl/sysadmin/appa_01.htm
  • [131019680060] |
  • http://tldp.org/HOWTO/RCS.html
  • [131019680070] |I use that for version controlling my dotfiles, for instance: [131019680080] |And if I want to edit them: [131019680090] |I recommend making a directory named RCS in your ~; you can then easily back up that directory somewhere. [131019690010] |A web search for git home directory brings up a lot of good hits. [131019690020] |In particular, there's a specialized utility called git-home which is probably worth looking at. [131019700010] |I have $HOME under git. [131019700020] |The first line of my .gitignore file is [131019700030] |The rest are patterns to not ignore using the ! modifier. [131019700040] |This first line means the default is to ignore all files in my home directory. [131019700050] |Those files that I want to version control go into .gitignore like this: [131019700060] |A trickier pattern I have is: [131019700070] |That is, I only want to version .ssh/config - I don't want my keys and other files in .ssh to go into git. [131019700080] |The above is how I achieve that. [131019700090] |Edit: Added slashes to the start of all paths. [131019700100] |This makes the ignore patterns match from the top of the repository ($HOME) instead of anywhere. [131019700110] |For example, if !lib/ was a pattern (don't ignore everything in the lib directory) and you added a file .gitignore, previously the pattern (!.gitignore) was matching that. With the leading slash (!/.gitignore), it will only match .gitignore in my home directory and not in any subdirectories. [131019700120] |I haven't seen a case where this makes a practical difference with my ignore list, but it appears to me to be more technically accurate. [131019710010] |What I do (with the same objectives) is to put my configuration files in a subdirectory ~/lib and have symbolic links in my home directory, e.g., .emacs -> lib/emacs/dot.emacs.
[131019710020] |I only keep configuration files that I wrote explicitly under version control; my home directory contains plenty of automatically-created dot files that are not under version control. [131019710030] |Thus ~/lib is under version control, and my home directory is not. [131019710040] |I have a script that creates the symbolic links from the files under ~/lib. [131019710050] |When I create an account on a new machine, I populate it by checking out ~/lib and running that script. [131019710060] |My experience is with CVS, not git, so it's not 100% transferable. [131019710070] |One of the reasons I didn't put my home directory directly under CVS is that ~/.cvsignore would apply to all my CVS checkouts and not just my home directory; git doesn't have this problem. [131019710080] |The downside of that approach, compared with having the home directory under version control, is that you can't use git status to distinguish between a file that you've explicitly decided to ignore (which would be listed in the ignore file, so not displayed) and a file that you have no opinion about (which would be displayed with a ?). [131019710090] |Some files need to be different on different machines. [131019710100] |I put them in a directory called ~/Local/SITENAME/lib and either create symbolic links for them as well or (for file formats that support it) have an include directive in the file under ~/lib. [131019710110] |I also have a symbolic link ~/Here -> ~/Local/SITENAME. [131019710120] |Since git, unlike CVS, is designed to support mostly-similar-but-not-identical repositories, there may be a better way to manage machine-specific files. [131019710130] |A few of my dot files are in fact not symbolic links, but automatically generated from content under ~/lib and ~/Here.
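The link-creation script mentioned in that answer isn't shown; a minimal sketch might look like the following. The flat dot.* layout and the link_dotfiles name are assumptions for illustration (the answer actually nests files like lib/emacs/dot.emacs), not the author's real script:

```shell
# link_dotfiles SRC DEST: for every SRC/dot.foo, create DEST/.foo -> SRC/dot.foo.
# Re-running is safe: -f replaces an existing link, -n avoids following one.
link_dotfiles() {
    src=$1 dest=$2
    for f in "$src"/dot.*; do
        [ -e "$f" ] || continue                # skip if the glob matched nothing
        ln -sfn "$f" "$dest/.${f##*/dot.}"     # dot.emacs -> .emacs
    done
}

# Example: link_dotfiles "$HOME/lib" "$HOME"
```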
[131019720050] |But I just have it in /home/dropbox and save all my settings and stuff there. [131019720060] |That way you don't have to svn pull or anything like that; the stuff is automatically synced. [131019720070] |Source control means, amongst other things, that you have change logs, that you can commit individual files and see differences between commits, and that the version history is kept forever. [131019720080] |I just have to point out that dropbox does everything on that list. [131019720090] |It's not the BEST for all of those. [131019720100] |But doing all of that is possible with dropbox. [131019730010] |vcs-home probably has information that you would be interested in. [131019740010] |iptables does not list rules I have created [131019740020] |I'm using this guide to set up a shared internet connection between two PCs. [131019740030] |At step 8 it says I should run the commands: [131019740040] |Doing this seems to have no effect on iptables' rules; if I run iptables -nvL my output is: [131019740050] |Is that correct or am I doing something wrong? [131019750010] |The command iptables -nvL displays the contents of the filter table. [131019750020] |The rule you are adding is in the nat table. [131019750030] |Add -t nat to look at the nat table: [131019760010] |Sftp interface to scp [131019760020] |I have a Linux-based router that doesn't have an SFTP server installed. [131019760030] |More specifically, when I sftp user@ipaddress I get a sh: /user/libexec/sftp-server: not found error. [131019760040] |My interest isn't in trying to resolve this error by installing a new package (the "distro" is specific to the router, and I'm not interested in trying to modify it). [131019760050] |What I am interested in is finding a command line utility that works like sftp but uses scp as the transfer mechanism. [131019770010] |You can use fish (files transferred over shell protocol).
[131019770020] |There are various client implementations, but none require any server support beyond regular SSH. [131019780010] |I use Veeam FastSCP for this. [131019780020] |It's very handy when dealing with ESXi, since that only has scp enabled, but it will also deal with any scp-enabled machine. [131019790010] |Perhaps I'm confused, but what do you mean by an "sftp-like interface"? [131019790020] |Just get/put files using scp from the command line. [131019790030] |To put file foo, from the command line on your Linux host, assuming 'username' exists as a user on the router: [131019790040] |$scp foo username@router:~ [131019790050] |This will copy file foo to the home directory of username. [131019790060] |To get a file from the router, assuming the file is in the home directory of user 'username': [131019790070] |$scp username@router:~/foo . [131019790080] |This will copy file foo from the router to whatever directory you're in when you execute the command. [131019790090] |I hope I understood your question correctly. [131019790100] |Good luck. [131019800010] |A client program like FileZilla can do this as well... the connection type you specify is 'sftp', which is misleading, since it actually doesn't require an sftp server, just ssh access to a machine. [131019810010] |XForwarding applications from OS X [131019810020] |Using X forwarding, you can access GUI applications over ssh between two Xorg-powered machines (and sometimes even from a Windows machine). [131019810030] |Is there a way to access OS X applications (like Finder) from an Xorg machine? [131019820010] |Finder doesn't use X APIs, so it can't be forwarded over ssh like that. [131019820020] |The same goes for most Mac applications; Apple has its own windowing system called Aqua. [131019820030] |Sharing the desktop via VNC or Apple Remote Desktop works fine though -- look in the "Sharing" preference pane for the "Screen Sharing" option to set it up on the Mac, then use a VNC client on the other machine.
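As a sketch of that last step, assuming a VNC viewer such as TigerVNC's vncviewer on the Linux side and the Mac's Screen Sharing on its default port: VNC display N corresponds to TCP port 5900+N, so port 5900 is display 0. The vnc_target helper and the mymac.local host name are hypothetical conveniences, not standard commands:

```shell
# vnc_target HOST [PORT]: build the HOST:DISPLAY argument that vncviewer
# expects, converting a TCP port to a VNC display number (port 5900 -> :0).
vnc_target() {
    host=$1 port=${2:-5900}
    echo "$host:$((port - 5900))"
}

# vncviewer "$(vnc_target mymac.local)"    # would connect on port 5900
```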
[131019830010] |Need help booting the machine using the Ubuntu 10.04 installer CD [131019830020] |
  • I am trying to install Ubuntu 10.04 on my machine which already has Windows Vista and Fedora installed.
  • [131019830030] |
  • I use GRUB to get the boot menu. [131019830040] |The GRUB screen looks something like this (it has a Fedora logo at the bottom) [131019830050] |
  • The problem is that when I insert the CD and try to boot, it takes me directly to the GRUB menu for Fedora/Vista, with nothing for Ubuntu.
  • [131019830060] |
  • So, I tried Ubuntu's CD boot helper to help me boot from the CD, and I get this error [131019830070] |
  • The BIOS says that I should press F2 for setup and F12 for boot options. [131019830080] |I tried them. [131019830090] |Nothing happens except that it goes straight to the GRUB menu.
  • [131019830100] |What should I do? [131019830110] |Originally posted here, did not get a solution till now. [131019840010] |The Ubuntu Boot CD Helper can't write to the MBR of the disk. [131019840020] |Though I have never used it, I assume this is because you don't have a Windows bootloader installed there; you have grub. [131019840030] |Chances are that you are using a USB keyboard (please do specify) and USB Legacy Support is not enabled on your motherboard. [131019840040] |This renders it unusable until the OS loads the appropriate drivers for the USB ports. [131019840050] |A 'quick' way to work around this is to plug in a PS/2 keyboard (if you have one). [131019840060] |Alternatively, you can set grub to boot from the CD-ROM drive. [131019840070] |Boot into Fedora, [131019840080] |~# sudo vi /boot/grub/menu.lst [131019840090] |And following from the last line, add [131019840100] |title CDROM [131019840110] |root (hd0,0) (this will be subject to your schema; you will need to place the appropriate device name here) [131019840120] |kernel /boot/grub/memdisk.bin [131019840130] |initrd /boot/grub/sbootmgr.dsk [131019840140] |See if that works. [131019840150] |Another alternative would be to restore the boot sector from Windows and then use the Ubuntu CD Boot Helper. [131019840160] |Haven't done it myself on Windows 7, but if I recall correctly, it involved running [131019840170] |fixmbr and fixboot [131019840180] |This will, of course, reset your boot sector to a state where you can only boot Windows. [131019840190] |But upon being able to install Ubuntu, you will be able to add an entry for Fedora too. [131019840200] |Cheers! [131019840210] |(And apologies about the poor formatting, I am still looking into it). [131019850010] |You should be able to just modify your BIOS boot order settings to make the BIOS check the CD-ROM drive before the hard disk. [131019850020] |It's that simple, usually. [131019860010] |It was a stupid problem.
[131019860020] |My machine has function keys that need to be pressed in combination with the Fn key (the opposite of most laptops). [131019860030] |So, I was trying just that (Fn+F2, Fn+F12). [131019860040] |It turns out that while booting, you need to press F2 without the Fn key.