[131064210010] |Sharing Mac Snow Leopard directory via NFS [131064210020] |I am running a Mac Pro using Snow Leopard (not Server). [131064210030] |I would like to share a directory with a Linux machine. [131064210040] |I have edited /etc/exports on the Mac to: [131064210050] |When I try to mount this from a Linux machine, I get: [131064210060] |Also, this is what I get for showmount -e on the Mac: [131064210070] |In other words, the export is not seen, apparently. [131064210080] |Any suggestions? [131064210090] |I have not found much good documentation on sharing via NFS on the Mac, particularly for later OS versions. [131064220010] |Bad form, I know, to answer my own question, but.... [131064220020] |I needed a couple more steps, outlined here. [131064220030] |In short, I needed to execute: [131064220040] |As another detail, I added the client name to the export and removed the "-rw" flag. [131064230010] |need to install mercurial [131064230020] |Possible Duplicate: How do I upgrade python on openSUSE? [131064230030] |Hi. If you look at my previous question you will find that this one is the same, but as I couldn't solve my problem yet I want to ask again. I did try all your good suggestions, but there was no mercurial package in openSUSE 10.2 (my Linux), so please give me another solution which doesn't need packages; I'm completely confused. I also installed openSUSE 11.1, but it had the same problems (I'm not familiar with this version either, and couldn't even find yum) :( Please give me some suggestions for a beginner in Linux, and any help to make my program work. [131064230040] |Any help would be appreciated. [131064230050] |Best regards [131064240010] |Ideal Hardware for GNU/Linux Laptop [131064240020] |What's the best way to find a laptop with hardware that is amenable to installing and running GNU/Linux? [131064240030] |I'm looking for something that will ideally work with GPL'd drivers and/or firmware. [131064240040] |I took a quick look at linux-on-laptops.com, but they don't list some of the newer models for Lenovo, for example. [131064240050] |Most of the test cases seem to be quite dated. [131064240060] |Also, it's not clear where to start looking given so much information. [131064240070] |Unfortunately, the FSF website doesn't list many laptop possibilities. [131064240080] |LAC, referenced from the FSF website, doesn't mention wireless connectivity as being a feature of their laptops, probably because of firmware issues. [131064240090] |I've been looking for a laptop that will accept the ath9k driver because those cards don't require firmware, but getting the card model from generic specs pages is not always possible. [131064240100] |Searching for lspci dumps online can be a roll of the dice. [131064240110] |And then there's the issue of what kind of graphics card is ideal from an FSF perspective. [131064240120] |From the FSF website: [131064240130] |This page is not an exhaustive list. [131064240140] |Also, some video cards may work with free software, but without 3D acceleration. [131064240150] |This information should be considered very tentative -- we are doing our best to update it. [131064240160] |Does anyone know the details of what works and what doesn't? [131064240170] |What good combinations are out there? [131064240180] |What do you use? [131064240190] |Where can I search? [131064240200] |Or will I have to bite the bullet and use some proprietary packages? [131064240210] |Thanks in advance.
[131064250010] |If you want to push the freedom requirement as far as possible, you would also want a coreboot BIOS. [131064250020] |The best (only?) option, in this case, is RMS's laptop: a Lemote YeeLoong. [131064250030] |It is, however, rather small (either 8.9'' or 10'') and underpowered, but very cheap. [131064250040] |Check http://www.tekmote.nl/epages/61504599.sf/nl_NL/?ObjectPath=/Shops/61504599/Categories/"Lemote linux PC and Linux laptops" [131064250050] |When it comes to choosing a video card, go Intel. [131064250060] |A Free (as in Freedom) driver AND firmware, and you will have 3D acceleration. [131064260010] |Linux compatible hardware [131064260020] |There is a list of notebooks there, too. [131064270010] |speedyx from identi.ca mentioned santech.it, system76 and ZaReason. [131064270020] |I emailed support at ZaReason and got a reply that you can request an Atheros wifi card that uses the ath9k driver; soon they will be offering it as an option on their website. [131064270030] |The only issue that remains in this context is the non-free BIOS, which I can probably live with. [131064270040] |(Probably.) :) [131064280010] |https://wiki.ubuntu.com/HardwareSupport/Machines/Netbooks [131064280020] |This might be helpful for you. [131064290010] |How to fix icedteanp plugin error? [131064290020] |I'm using openSUSE 11.3 with "latest updates". [131064290030] |I have installed vimprobable and uzbl. [131064290040] |Whenever I start one of those two browsers, I get the error message: [131064290050] |icedteanp plugin error: Failed to run etc/alternatives/../../bin/java. [131064290060] |For more detail rerun "firefox -g" in a terminal window. [131064290070] |After some time (about 1 minute), the message disappears by itself (or when I click the "close" button). The browser(s) continue loading, and the error appears again on specific sites. [131064290080] |What can I do to fix this problem? [131064290090] |Thank you. [131064300010] |Your java alternative is not configured properly; the web browser cannot find the binary. [131064300020] |Update your alternatives: [131064300030] |This should output either a single alternative (with a path) or multiple alternatives to choose from. [131064300040] |If only one, check that the path exists; otherwise, select your alternative. [131064300050] |Try again opening a page containing java elements. [131064300060] |Still the same error? [131064300070] |First, find out what provides java: [131064300080] |If this returns nothing, you have to install java first. [131064300090] |Otherwise, check where exactly the binary lies: [131064300100] |This should show the java binary. [131064300110] |Now update your alternatives to refer to that binary: [131064310010] |How to find out which (not installed) package a file belongs to? [131064310020] |In the Debian family of OSes, dpkg --search /bin/ls gives: [131064310030] |That is, the file "/bin/ls" belongs to the Debian package named coreutils. [131064310040] |But this only works if the package is installed. [131064310050] |What if it's not? [131064320010] |The standard tool for this is apt-file. [131064320020] |Run apt-file update to download the index file. [131064320030] |Here's the output: [131064320040] |After that, run apt-file search search_term. [131064330010] |apt-file [131064330020] |apt-file can search for the package that provides a given binary (on Debian or Ubuntu); it is not installed by default, but it is in the repositories.
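To make that concrete, here is a minimal sketch of getting apt-file going (the package name apt-file is the usual one on Debian/Ubuntu; verify on your release):

    sudo apt-get install apt-file    # install the search tool itself
    sudo apt-file update             # download the Contents index it searches
    apt-file search /bin/ls          # ask which package ships a given path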
[131064330030] |For example, let's search for the not installed binary mysqldump: [131064330040] |It's also possible to list the contents of a (not-installed) package: [131064330050] |yum [131064330060] |yum accepts the command whatprovides (or provides) to search for installed or not installed binaries: [131064330070] |Again, the not installed mysqldump: [131064330080] |zypper [131064330090] |opensuse'szypper does include the command what-provides and concerning the manual, this should also search through the contents of not installed packages. [131064330100] |Apparently, it does not work like intended. [131064330110] |There is a request for this feature. [131064330120] |Webpin provides a webbased solution, there is even a script for the command-line. [131064330130] |pkgfile [131064330140] |Available as pkgtools for pacman based systems. [131064330150] |Provides a similar search feature like the others above: [131064340010] |How to build-compile a .c file.... [131064340020] |Hello... [131064340030] |I have little experience with LINUX . [131064340040] |I am using Debian. [131064340050] |It has a library glibc which has several useful programs. iconv is the program that i want to use to do several charset conversions... [131064340060] |However i want to create my charset to use it at a really old from 1978 dot matrix heavy duty printer which has a custom charset for my language... [131064340070] |Capital latin /greek only 7 bit So i should write my own module for iconv... [131064340080] |I found how to configure it and make it but i dont know how to do the compilation.. from source .c file to .so file.... [131064340090] |If there is no solution i will try to build from source the libc libray and copy the .so file that it should generate (think so)... [131064350010] |A quick search led me to this HOWTO. [131064350020] |I have no experience with this, but your command will look similar like this: [131064360010] |Implementing a Unix shell in C: Logging [131064360020] |Hello everyone. [131064360030] |I am currently working on implementing my own UNIX shell in C. [131064360040] |The basis of the shell is working, you could find the source code here. [131064360050] |Right now my next task is to implement a logging feature for the shell. [131064360060] |So here's my question: [131064360070] |What are the UNIX/Linux best practices to do that? [131064360080] |The simple way would be to open a file and write to it every time. [131064360090] |But I was wondering if there is a function that would take care of that for me, helping me produce ... "cleaner"... code. [131064360100] |Thank you in advance for your help, I'm quite new to programming, but I want to do things right. [131064360110] |PS: If you have any general remarks about the rest of the code, they'd be greatly appreciated :-) [131064370010] |Common practice for logging is to use syslog. [131064370020] |It's quite simple and does not require much code. [131064370030] |If you need something else - look at logging libraries like liblogging or log4c [131064380010] |Typically command history is saved to a hidden file in the user's home directory. [131064380020] |Other than that, you can log to stderr or syslog. [131064390010] |Can I keep applications running across X sessions? [131064390020] |Let's say I have a Firefox window downloading a big file that will take a lot of time. [131064390030] |Now that it's running halfway and I want to switch to another DE (for example from GNOME to KDE), can I do that without interrupting the download? 
[131064390040] |EDIT: I'm not using GDM or KDM or any desktop manager, which may make the situation more difficult... [131064400010] |Yes you can. [131064400020] |If you use GNOME, click on System->Log Out username, which will bring this dialogue: [131064400030] |Clicking on user Switch User takes me to a gdm window, which asks me which user I want to switch to. [131064400040] |Once you are done with the login, you can always switch back to the original user with either CtrlAltF7 or by using the same sequence of commands I stated above. [131064410010] |You have to leave the original X server running. [131064410020] |You can start another X server on another tty. [131064410030] |So, on a typical system, do ctrl+alt+f1, then log in and run startx -- :1. [131064410040] |You should end up with another X session on reachable by ctrl+alt+f8. [131064410050] |Any number of X servers can be started by changing the number after the colon; I don't know how to get to ones after you run out of f-keys. [131064410060] |If you want, you can setup special .xinitrc files that start different desktop environments. [131064410070] |So you might have a .xinitrc-kde that starts a KDE session. [131064410080] |In that file, you'd have something like exec startkde. [131064410090] |And you'd start X like by doing startx ./.xinitrc-kde -- :1. [131064410100] |If you plan on running Firefox on both the sessions, there may be some issues. [131064410110] |Look into the "no-remote" and "ProfileManager" command line options for Firefox. [131064420010] |If you've planned in advance that you want to access one application from several different X sessions, you can run it inside a virtual X server: the application displays inside the virtual X server, and the virtual X server appears as a window inside any number of real X servers. [131064420020] |One possibility for the virtual X server is VNC. [131064420030] |Start the vncserver program; this creates a virtual X server and runs ~/.vnc/xstartup, which typically runs ~/.xinitrc like startx. [131064420040] |Then call xvncviewer to show a window containing the virtual X server's display. [131064420050] |The virtual server keeps running until the session exits or you run vncserver -kill; you can connect and disconnect viewers at will. [131064420060] |You may need to specify a display number on the command line, e.g. vncserver :3 and xvncviewer :3. [131064420070] |VNC sessions can be accessed from different machines if no firewall gets in the way: xvncviewer somehost:3. [131064420080] |There are multiple implementations of VNC servers and viewers. [131064430010] |You can start a nested X server (Xnest or better Xephyr) and launch another graphic environment in it. [131064440010] |Nested ssh session inside screen has no color [131064440020] |I am attempting to keep a nested ssh session inside of a byobu/screen window, which I also connect to over SSH with Putty on Windows. [131064440030] |However the nested SSH session has no color. [131064440040] |Here's what I'm talking about: [131064440050] |On the left you have byobu connected to wreckcreations with no color. [131064440060] |On the right you have Putty connected directly to wreckcreations with color. [131064440070] |Note that normally byobu has color when working locally. [131064440080] |I tried messing with $TERM, $PSI, and other variables to no avail. [131064440090] |Any idea's what would cause this? [131064450010] |It could be many things. 
[131064450020] |Please provide the output of: [131064450030] |Expected results: [131064450040] |My guess: TERM is set to something unusual, and dircolors doesn't know about it, therefore ls doesn't know what colors to use. [131064450050] |If this is the case, running the above commands inside your byobu/screen session, you would see: [131064450060] |Confirm that this is the case by running: [131064450070] |I would expect it to print nothing. [131064450080] |The simplest fix, depending on your configuration, would be: [131064460010] |Is there a tool that lets me exaggerate audio levels? [131064460020] |It often happens that a movie I'm watching has audio loudness levels set too low. [131064460030] |Since I use a typical laptop, I can't just go raise the volume level of my speakers. [131064460040] |I've raised all knobs offered by GNOME's Volume Control. [131064460050] |Are there tools out there that would allow me to take the loudness to beyond 100% level? [131064470010] |VLC lets you raise volume beyond what it calls 100%, and I assume it does that by an appropriate filter. [131064470020] |Probably mplayer has one, too, but that involves reading the manpage, which you can do for yourself. ;) [131064470030] |So, short answer: look to your player program for a solution. [131064470040] |(Though it's possible that you can globally configure something in ALSA.) [131064480010] |mplayer has -softvol-max option that allows to amplify loudness. [131064480020] |VLC also by default allow you to raise volume up to 200%, but it can be changed in options. [131064490010] |Is SCADA an RTOS? [131064490020] |Is SCADA one of the RTOS out there (it's used for real-time control and data acquisition)? [131064500010] |According to the Wikipedia article, SCADA is a description of a type of system (specifically industrial control systems), not any one particular thing. [131064510010] |SCADA describes "Supervisory Control and Data Acquisition" systems - it often is Real Time, but doesn't have to be. [131064510020] |There is usually a component that is real time (eg for logging, or managine pressures etc in machinery - ie essential heartbeat stuff) but this isn't absolutely necessary. [131064510030] |Many are still legacy code with a TCP/IP front end tacked on, and for some of these the front end is a very basic web server. [131064510040] |From an offtopic security perspective, there is a large scale problem with connecting legacy SCADA apps to the Internet without hardening them appropriately - as this leads to logical attacks on real world systems (such as oil pipelines, power stations etc [131064520010] |What causes various signals to be sent? [131064520020] |I sometimes get a little confused by all of the signals that an application can receive. [131064520030] |As I understand it, an application has a default handler for each of these signals, but it can provide its own handler by calling sigaction(). [131064520040] |So here is my question: what causes each of the signals to be sent? [131064520050] |I realize that you can manually send signals to running applications via the -s parameter to kill, but what are the natural circumstances under which these signals are sent? [131064520060] |For example, when does SIGINT get sent? [131064520070] |Also, are there any restrictions on which signals can be handled? [131064520080] |Can even SIGSEGV signals be processed and control returned to the application? 
[131064530010] |In addition to processes calling kill(2), some signals are sent by the kernel (or sometimes by the process itself) in various circumstances: [131064530020] |
  • Terminal drivers send signals corresponding to various events: [131064530030] |
  • Key press notifications: SIGINT (please go back to the main loop) on Ctrl+C, SIGQUIT (please quit immediately) on Ctrl+\, SIGTSTP (please suspend) on Ctrl+Z. [131064530040] |The keys can be changed with the stty command.
  • [131064530050] |
  • SIGTTIN and SIGTTOU are sent when a background process tries to read or write to its controlling terminal.
  • [131064530060] |
  • SIGWINCH is sent to signal that the size of the terminal window has changed.
  • [131064530070] |
  • SIGHUP is sent to signal that the terminal has disappeared (historically because your modem had hung up, nowadays usually because you've closed the terminal emulator window).
  • [131064530080] |
  • Some processor traps can generate a signal. [131064530090] |The details are architecture and system dependent; here are typical examples: [131064530100] |
  • SIGBUS for an unaligned access memory;
  • [131064530110] |
  • SIGSEGV for an access to an unmapped page;
  • [131064530120] |
  • SIGILL for an illegal instruction (bad opcode);
  • [131064530130] |
  • SIGFPE for a floating-point instruction with bad arguments (e.g. sqrt(-1)).
  • [131064530140] |
  • A number of signals notify the target process that some system event has occured: [131064530150] |
  • SIGALRM notifies that a timer set by the process has expired. [131064530160] |Timers can be set with alarm, setitimer and others.
  • [131064530170] |
  • SIGCHLD notifies a process that one of its children has died.
  • [131064530180] |
  • SIGPIPE is generated when a process tries to write to a pipe when the reading end has been closed (the idea is that if you run foo | bar and bar exits, foo gets killed by a SIGPIPE).
  • [131064530190] |
  • SIGPOLL (also called SIGIO) notifies the process that a pollable event has occured. [131064530200] |POSIX specifies pollable events registered through the I_SETSIG ioctl. [131064530210] |Many systems allow pollable events on any file descriptor, set via the O_ASYNC fcntl flag. [131064530220] |A related signal is SIGURG, which notifies of urgent data on a device (registered via the I_SETSIG ioctl) or socket.
  • [131064530230] |
  • On some systems, SIGPWR is sent to all processes when the UPS signals that a power failure is imminent.
  • [131064530240] |These lists are not exhaustive. [131064530250] |Standard signals are defined in signal.h. [131064530260] |Most signals can be caught and handled (or ignored) by the application. [131064530270] |The only two portable signals that cannot be caught are SIGKILL (just die) and SIGSTOP (stop execution). [131064530280] |SIGSEGV (segmentation fault) and its cousin SIGBUS (bus error) can be caught, but it's a bad idea unless you really know what you're doing. [131064530290] |A common application for catching them is printing a stack trace or other debug information. [131064530300] |A more advanced application is to implement some kind of in-process memory management, or to trap bad instructions in virtual machine engines. [131064540010] |To answer your second question first: SIGSTOP and SIGKILL cannot be caught by the application, but every other signal can, even SIGSEGV. [131064540020] |This property is useful for debugging -- for instance, with the right library support, you could listen for SIGSEGV and generate a stack backtrace to show just where that segfault happened. [131064540030] |The official word (for Linux, anyway) on what each signal does is available by typing man 7 signal from a Linux command line. http://linux.die.net/man/7/signal has the same information, but the tables are harder to read. [131064540040] |However, without some experience with signals, it's hard to know from the short descriptions what they do in practice, so here's my interpretation: [131064540050] |

    Triggered from the keyboard

    [131064540060] |
  • SIGINT happens when you hit CTRL+C.
  • [131064540070] |
  • SIGQUIT is triggered by CTRL+\, and dumps core.
  • [131064540080] |
  • SIGTSTP suspends your program when you hit CTRL+Z. [131064540090] |Unlike SIGSTOP, it is catchable, which gives programs like vi a chance to reset the terminal to a safe state before suspending themselves.
  • [131064540100] |

    Terminal interactions

    [131064540110] |
  • SIGHUP ("hangup") is what happens when you close your xterm (or otherwise disconnect the terminal) while your program is running.
  • [131064540120] |
  • SIGTTIN and SIGTTOU pause your program if it tries to read from or write to the terminal while it's running in the background. [131064540130] |For SIGTTOU to happen, I think the program needs to be writing to /dev/tty, not just default stdout.
  • [131064540140] |

    Triggered by a CPU exception

    [131064540150] |These mean your program tried to do something wrong. [131064540160] |
  • SIGILL means an illegal or unknown processor instruction. [131064540170] |This might happen if you tried to access processor I/O ports directly, for example.
  • [131064540180] |
  • SIGFPE means there was a hardware math error; most likely the program tried to divide by zero.
  • [131064540190] |
  • SIGSEGV means your program tried to access an unmapped region of memory.
  • [131064540200] |
  • SIGBUS means the program accessed memory incorrectly in some other way; I won't go into details for this summary.
  • [131064540210] |

    Process interaction

    [131064540220] |
  • SIGPIPE happens if you try to write to a pipe after the pipe's reader closed their end. [131064540230] |See man 7 pipe.
  • [131064540240] |
  • SIGCHLD happens when a child process you created either quits or is suspended (by SIGSTOP or similar).
  • [131064540250] |

    Useful for self-signaling

    [131064540260] |
  • SIGABRT is usually caused by the program calling the abort() function, and causes a core dump by default. [131064540270] |Sort of a "panic button".
  • [131064540280] |
  • SIGALRM is caused by the alarm() system call, which will cause the kernel to deliver a SIGALRM to the program after a specified number of seconds. [131064540290] |See man 2 alarm and man 2 sleep.
  • [131064540300] |
  • SIGUSR1 and SIGUSR2 are used however the program likes. [131064540310] |They could be useful for signaling between processes.
  • [131064540320] |

    Sent by the administrator

    [131064540330] |These signals are usually sent from the command prompt, via the kill command, or fg or bg in the case of SIGCONT. [131064540340] |
  • SIGKILL and SIGSTOP are the unblockable signals. [131064540350] |The first always terminates the process immediately; the second suspends the process.
  • [131064540360] |
  • SIGCONT resumes a suspended process.
  • [131064540370] |
  • SIGTERM asks a process to terminate, like SIGKILL does, but unlike SIGKILL it can be caught, ignored, or handled gracefully; see the trap sketch after this list.
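As a quick illustration of the catchable/uncatchable distinction, a small shell sketch (the echoed messages are placeholders):

    #!/bin/sh
    # SIGINT and SIGTERM can be trapped; SIGKILL and SIGSTOP cannot
    trap 'echo "caught SIGINT (Ctrl+C)"' INT
    trap 'echo "caught SIGTERM, cleaning up"; exit 0' TERM
    while :; do sleep 1; done
    # kill -TERM <pid> runs the handler; kill -KILL <pid> terminates regardless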
  • [131064550010] |LVM Mirrored Logical Volume Performance [131064550020] |I read that when mirroring a logical volume a log of some sort is used to keep the files in sync. [131064550030] |You can either set it so this log is in a separate physical volume or in memory. [131064550040] |I also read that if the log is in memory, the system has to resync the volumes. [131064550050] |Is there a performance hit for using the log in memory? [131064550060] |For example, does it take much longer to boot the machine? [131064550070] |Is the data at risk while the resync is happening? [131064550080] |I'm working with Ubuntu 10.04 in this case. [131064550090] |I believe it's lvm2. [131064550100] |The clearest documentation I've found is the CentOS document here. [131064550110] |I was also looking at this description. [131064560010] |Can you add the links you are referring to? [131064560020] |Because just mirroring does not need a log. [131064560030] |A log (on the same or extra device) is usually involved when you use a journaling filesystem - if you use mirroring or not on the layer below (i.e. the blocklayer). [131064560040] |Update: Ok, with the links things are clearer now. [131064560050] |The LVM mirroring seems to be quite different from the linux md (RAID 1) mirroring. [131064560060] |To quote from the lvcreate man page: [131064560070] |Specifying the optional argument "--corelog" will create a mirror with an in-memory log verses a disk-based (persistent) log. [131064560080] |While this removes the need for an extra log device and may be slightly faster, it requires that the entire mirror be resynchronized upon each instantiation (e.g. a reboot). [131064560090] |Thus, with a memory based log you get a significant performance hit at startup and a performance hit, when the log physical volume is on the same hardware disk. [131064560100] |Googling around, mirroring using Linux mdadm seems to be the better approach ATM. [131064560110] |(You can use the md device as physical device for some lvm setup.) [131064560120] |First, it does not need an extra log (and does not do an expensinve resync at every startup). [131064560130] |Second, lvm mirrors does not seem to support parallel reading, i.e. md mirrors should have better read performance: [131064560140] |http://serverfault.com/questions/97845/lvm-mirroring-vs-raid1 [131064560150] |http://serverfault.com/questions/126851/linux-lvm-mirror-vs-md-mirror [131064570010] |How do I get my shell functions to make use of existing autocompletion in zsh? [131064570020] |I have a couple of shell functions defined in my .zshrc which save on typing and allow integration with screen. [131064570030] |For example: [131064570040] |However, when I use these shell functions, I can't take advantage of zsh's built in command completion features. [131064570050] |(In the case of SSH, automatically completing hostnames based on known_hosts and ssh_config). [131064570060] |What's the preferred way to hook in to the existing completion mechanisms for commands I've created wrappers for? [131064570070] |EDIT: [131064570080] |Thanks to Gilles for his answer below. [131064570090] |Using compdef appears to be the way to do this, but interestingly it doesn't seem to work uniformly. 
[131064570100] |In this case, I have the following setup: [131064570110] |Completion for my wrapper function 'm' works as expected, however completion for my function 's' does not; rather than attempting to complete hostnames, it appears to fall back to default autocompletion by providing me a list of files when I hit 'tab'. [131064570120] |Is there some oddity in the way that SSH completion is handled which means I need to do something further? [131064580010] |I use the following function to say that a function or alias or wrapper script (e.g. s) is to be completed like an existing command (e.g. ssh): [131064580020] |Some completion commands apply to a family of functions and read the first word of the command line to determine which particular command to complete. [131064580030] |For example, the commands ssh, scp, sftp and a few more are all completed by the function _ssh. [131064580040] |In that case, you need to tell the completion function which “service” your function is like (by default, the service is the executable name, here your function's name). [131064590010] |What are the pros/cons of deb vs. rpm? [131064590020] |For whatever reasons, I've always used RPM based distros (Fedora, Centos and currently openSUSE). [131064590030] |I have often heard it stated that deb is better than rpm, but when asked why, have never been able to get a coherent answer (usually get some zealous ranting and copious amounts of spittle instead). [131064590040] |I understand there may be some historical reasons, but for modern distros using the two different packaging methods, can anybody give the technical (or other) merits of one vs. the other? [131064600010] |Historically, my impression has always been that the Debian archive was the *.deb format's biggest strength. [131064600020] |Since the advent of yum, that's less true, if it's true at all. [131064610010] |One thing I like about RPMs is the (recent?) addition of delta RPMs. [131064610020] |This allows for easier updating, reducing bandwidth required. [131064610030] |DEBs are standard ar files (with more standard archives inside), RPMs are "proprietary" binary files. [131064610040] |I personally think the former is more convenient. [131064610050] |Just two things I can think off the top of my head. [131064610060] |Both are very comparable. [131064610070] |Both have excellent tools for packaging. [131064610080] |I don't think there are so many merits for one over the other or vice versa. [131064620010] |RPM: [131064620020] |
  • 'Standardized' (note: there is no equivalent deb spec)
  • [131064620030] |
  • Used by many different distributions (but packages from one do not necessarily work on another)
  • [131064620040] |
  • IIRC, allows dependencies on files, not only on packages
  • [131064620050] |DEB: [131064620060] |
  • Growing popularity
  • [131064620070] |
  • Allows recommendations and suggestions (possibly newer RPM allows it as well)
  • [131064620080] |Probably the more important question is the package manager (dpkg vs. yum vs. aptitude etc.) rather than the package format (as both are comparable). [131064630010] |Debian packages can include an installed size, but I don't believe RPMs have an equivalent field. [131064630020] |It can be computed based on files included in the package, but also can't be relied upon because of actions that can be taken in the pre/post install scripts. [131064630030] |Here is a pretty good reference for comparison of some specific features that are available for each specific packaging format: http://debian-br.sourceforge.net/txt/alien.html [131064640010] |The main difference for a package maintainer (I think that would be 'developer' in Debian lingo) is the way package meta-data and accompanying scripts come together. [131064640020] |In the RPM world, all your packages (the RPMs you maintain) are located in something like ~/rpmbuild. [131064640030] |Underneath, there is the SPEC directory for your spec-files, a SOURCES directory for source tarballs, RPMS and SRPMS directories to put newly created RPMs and SRPMs into, and some other things that are not relevant now. [131064640040] |Everything that has to do with how the RPM will be created is in the spec-file: what patches will be applied, possible pre and post-scripts, meta-data, changelog, everything. [131064640050] |All source tarballs and all patches of all your packages are in SOURCES. [131064640060] |Now, personally, I like the fact that everything goes into the spec-file, and that the spec-file is a separate entity from the source tarball, but I'm not overly enthusiastic about having all sources in SOURCES. [131064640070] |Imho, SOURCES gets cluttered pretty quickly and you tend to lose track of what is in there. [131064640080] |However, opinions differ. [131064640090] |For RPMs it is important to use the exact same tarball as the one the upstream project releases, up to the timestamp. [131064640100] |Generally, there are no exceptions to this rule. [131064640110] |Debian packages also require the same tarball as upstream, though Debian policy requires some tarballs to be repackaged (thanks Umang). [131064640120] |Debian packages take a different approach. [131064640130] |(Forgive any mistakes here: I am a lot less experienced with debs than I am with RPMs.) Debian packages' development files are contained in a directory per package. [131064640140] |What I (think I) like about this approach is the fact that everything is contained in a single directory. [131064640150] |In the Debian world, it is a bit more accepted to carry patches in a package that are not (yet) upstream. [131064640160] |In the RPM world (at least among the Red Hat derivatives) this is frowned upon. [131064640170] |Also, Debian has a vast number of scripts that are able to automate a huge portion of creating a package. [131064640180] |For example, creating a simple package of a setuptools-based Python program is as simple as creating a couple of meta-data files and running debuild. [131064640190] |That said, the spec-file for such a package in RPM format would be pretty short, and in the RPM world, too, there's a lot of stuff that is automated these days. [131064650010] |The openSUSE Build Service (OBS) and zypper are a couple of the reasons I prefer RPM over deb from a packager and user point of view. [131064650020] |Zypper has come a long way and is pretty dang fast.
[131064650030] |OBS, although it can handle debs, is really nice when it comes to packaging rpms for various platforms such as openSUSE, SLE, RHEL, centos, fedora, mandriva, etc. [131064660010] |A lot of people compare installing software with apt-get to rpm -i, and therefore say DEB better. [131064660020] |This however, has nothing to do with the DEB file format. [131064660030] |The real comparison is dpkg vs rpm and apitude/apt-* vs zypper/yum. [131064660040] |For a user point of view, there isn't much difference in these tools. [131064660050] |The RPM and DEB formats are both just archive files, with some metadata attached to them. [131064660060] |They are both as arcane, have hardcoded install paths (yuk!) and only differ in subtle details. [131064660070] |Both dpkg -i and rpm -i have no way of figuring out how to install dependencies, except if they happen to be specified at the command line. [131064660080] |On top of these tools, there is a repository management in the form of apt-... or zypper/yum. [131064660090] |These tools download repositories, track all metadata and automate the downloading of dependencies. [131064660100] |The final installation of each single package is handed over to the low-level tools. [131064660110] |For a long time, apt-get has been superior in processing the enormous metadata really fast, where yum would take ages to do it. [131064660120] |RPM also suffered from sites like rpmfind, where you found 10+ incompatible packages for different distributions. [131064660130] |Apt completely hid this problem for DEB packages, because all packages got installed from the same source. [131064660140] |In my opinion, zypper has really closed to gap with apt, and there is no reason to be ashamed of using an RPM-based distribution these days. [131064660150] |It's just as good, if not easier to use with the openSUSE build service at hand for a huge compatible package index. [131064670010] |There is also the "philosophical" difference where in Debian packages you can ask questions and by this, block the installation process. [131064670020] |The bad side of this is that some packages will block your upgrades until you reply. [131064670030] |The good side of this is, also as a philosophical difference, on Debian based systems, when a package is installed, it is configured (not always as you'd like) and running. [131064670040] |Not on Redhat based systems where you need to create/copy from /usr/share/doc/* a default/template configuration file. [131064680010] |From a system administrator's point of view, I've found a few minor differences, mainly in the dpkg/rpm tool set rather than the package format. [131064680020] |
  • dpkg-divert makes it possible to have your own file displace the one coming from a package. [131064680030] |It can be a lifesaver when you have a program that looks for a file in /usr or /lib and won't take /usr/local for an answer. [131064680040] |The idea has been proposed for rpm but, as far as I can tell, never adopted.
  • [131064680050] |
  • When I last administered rpm-based systems (which admittedly was years ago, maybe the situation has improved), rpm would always overwrite modified configuration files and move my customizations into *.rpmsave (IIRC). [131064680060] |This has made my system unbootable at least once. [131064680070] |Dpkg asks me what to do, with keeping my customizations as the default.
  • [131064680080] |
  • An rpm binary package can declare dependencies on files rather than packages, which allows for finer control than a deb package.
  • [131064680090] |
  • You can't install a version N rpm package on a system with version N-1 of the rpm tools. [131064680100] |That might apply to dpkg too, except the format doesn't change as often.
  • [131064680110] |
  • The dpkg database consists of text files. [131064680120] |The rpm database is binary. [131064680130] |This makes the dpkg database easy to investigate and repair. [131064680140] |On the other hand, as long as nothing goes wrong, rpm can be a lot faster (installing a deb requires reading thousands of small files).
  • [131064680150] |
  • A deb package uses standard formats (ar, tar, gzip), so you can inspect (and, in a pinch, tweak) deb packages easily; see the sketch after this list. [131064680160] |Rpm packages aren't nearly as friendly.
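A sketch of such an inspection, assuming gzip-compressed members (newer debs may use xz or zstd instead):

    ar t package.deb                               # lists debian-binary, control.tar.gz, data.tar.gz
    ar p package.deb control.tar.gz | tar tzf -    # peek at the maintainer scripts and control data
    ar p package.deb data.tar.gz | tar tzf -       # list the files the package would install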
  • [131064690010] |I think the bias comes not from the package format, but from the inconsistencies that used to exist in RedHat's repositories. [131064690020] |Back when RedHat was a distribution (before the days of RHEL, Fedora, and Fedora Core), people would sometimes find themselves in "RPM Hell" or "dependency Hell". [131064690030] |This occurred when a repository would end up with a package that had a dependencies (several layers deep, usually) which were mutually exclusive. [131064690040] |Or it would arise when two different packages had two mutually exclusive dependencies. [131064690050] |This was a problem with the state of the repository, not with the package format. [131064690060] |The "RPM Hell" left a distaste for RPM systems among some population of Linux users who had gotten burned by the problem. [131064700010] |.rpm files have to be created for specific architecture and specific versions of operating system..... such a limitation does not occur with .deb files.... [131064700020] |does this mean people have to spend less time maintaining the package?? [131064710010] |For Debian Packages there is a large set of helper scripts, a consistent policy manual and at least one way of doing almost everything. [131064710020] |Dependencies are handled very well and can be defined in very good granularity. [131064710030] |Re-building packages is very easy with debian packages and well supported by the tools available. [131064720010] |Go search Google for [131064720020] |
  • rpm duplicate packages
  • [131064720030] |
  • dpkg duplicate packages
  • [131064720040] |Read the pages that come back. [131064720050] |Very telling that you can have a messed up RPM database with duplicate packages in it while no such case happens with dpkg. [131064730010] |mount root fs to /target [131064730020] |Hello, [131064730030] |I am installing crunchbang linux (#!) to my eeePC and it is unable to start the disk partitioner. [131064730040] |I traced the problem to partman and partman-lvm that states [131064730050] |No volume groups found. [131064730060] |So I have done some snooping, and I can get around that part of the installer (that just hangs) if I can mount my future root partition to /target and then go from there. [131064730070] |However, I'm having a lot of trouble with the mount command. [131064730080] |I want to mount /dev/sda1 to /target. /dev/sda1 is ext3. [131064730090] |When I try [131064730100] |mount -t ext3 /dev/sda1 /target it states: mount -t ext3 /dev/sda1 /target/ failed: Invalid argument. [131064730110] |To get a place (/target) I simply did mkdir /target. [131064730120] |Perhaps this is not the proper way to do this? [131064730130] |Gracias =) [131064740010] |You're doing it the right way. [131064740020] |It may be that the device /dev/sda1 doesn't exist yet. [131064740030] |You also probably don't need to specify -t ext3 since that should be default. [131064740040] |I don't expect having it would cause any problem though. [131064750010] |Different linux distros sharing the same /home folder? [131064750020] |what is the best way to share the same /home directories between different linux distribution? [131064750030] |I mean, I want to have both , say, Fedora and Arch, and I want to find my files at home no matter which linux distro I boot into. [131064750040] |But if I mount the same partition for /home then I may mess up the configurations saved inside /home directory. so what can I do? [131064760010] |I've only ever upgraded between different Debian versions, and don't remember having issues. [131064760020] |However, note that most packages found in distros are essentially the same, meaning their configs won't vary between. [131064760030] |Any packages unique to a specific distro won't cause a problem. [131064760040] |Often it happens that they change their configurations between upgrades, and may leave the old configs behind. [131064760050] |So go ahead and give it a try, but not before backing up the current configs... just in case. [131064770010] |Much of what makes one distro look different from another, so you might find that they end up with a few aesthetic clashes (same desktop background, for example), but generally I'd say it'll be ok. [131064780010] |I would not recommend sharing your /home between radically different distributions. [131064780020] |Two versions of the same program reading and writing the same config files could result in problems, e.g. if the newer version writes something that the older version does not understand. [131064780030] |If you don't mind the paths being different, save your files in the /home for one distro and mount that /home at another location on the other distro (such as /home//fedora). [131064780040] |Then, /home//foo/bar can be accessed via /home//fedora/foo/bar on arch, for example. [131064780050] |If you want the paths to be the same, save most of your files to a third, distinct partition, and mount it in the same place within both distributions e.g. /home//stuff. [131064790010] |You can use symlinks [131064790020] |On each distro once. 
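A sketch of the idea, using ~/.kde purely as an illustrative example (pick whichever config directories actually clash):

    # while booted into, say, Fedora:
    mv ~/.kde ~/.kde-fedora      # give this distro's config a private home
    ln -s ~/.kde-fedora ~/.kde   # the expected path now points at it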
[131064790030] |Now each distro has its own configuration files. [131064800010] |You can set the default Documents folder to a different location or partition, and the same for other folders, like the Desktop folder, the Download folder and so forth. [131064800020] |Each application has its own way of using the default paths, so the first time will be a long job... [131064800030] |Some examples: [131064800040] |KDE http://docs.kde.org/stable/en/kdebase-workspace/kcontrol/paths/index.html [131064800050] |GNOME http://ubuntuforums.org/showthread.php?t=631711 [131064800060] |If you don't find instructions on how to change the defaults for some software, you can ask here again. [131064800070] |Then there is the hard but intelligent way, which is to set up different distributions on the same PC sharing the same kernel. [131064800080] |I advise you (all), for the sake of curiosity, to take a look at this article: [131064800090] |http://teddziuba.com/2011/01/multiple-concurrent-linux-distros.html [131064810010] |I'd recommend using symlinks for all the common configuration files you find yourself missing from one to the other. [131064810020] |Create a new directory in a place accessible to both distros, move the files there, and symlink from there. [131064810030] |Not only does this control exactly what gets shared, but it makes it very easy to move your preferences to other machines, to put them under version control if you need to, and to back them up. [131064810040] |There are even tools to help you do these things based on the assumption you are working this way (see, for example, homesick). [131064810050] |As far as setting common directories for things such as documents, videos, music etc., there is a standard for this in the form of XDG user dirs, which configures things like desktop, music, images, videos, etc. ( http://freedesktop.org/wiki/Software/xdg-user-dirs). [131064810060] |The directories can be outside your home dir, or you can symlink as you like and set the dirs to point at the symlinks. [131064810070] |I know Gnome works with these and assume KDE does too. [131064810080] |I did try using the entire same home dir in the past, and different versions of applications quickly caused problems.
  • UID and GID must be the same on each distribution for the user(s) in question.
  • [131064830040] |
  • (as already pointed out) different configuration files for the same programs could result in unexpected behavior.
  • [131064830050] |
  • If you install all distributions onto the same boot folder, make sure that the bootloader handles the different distributions correctly.
  • [131064830060] |I have a working (virtual) setup: [131064830070] |Ubuntu and Fedora both run Gnome 2.30, openSUSE has KDE4 and ArchLinux LXDE. [131064830080] |All distributions have their necessary boot files on one partition. [131064830090] |Switching between the distributions provides a persistent user configuration like intended. [131064830100] |The other possibility would be a lightweight home folder (doesn't have to be a whole partition) for each of the distributions, only providing the necessary configuration files (.gnome2, .kde4, .compiz, .themes, etc.) and a shared data partition with the "heavy" stuff (documents, pictures, videos, music, etc.). [131064830110] |Symbolic links in each of the distributions own home folder would then point to the shared partition. [131064830120] |Afterwards, this can be expanded at will to include other stuff as well. [131064830130] |Example: You have eclipse IDE installed on all distributions and want the same configuration and source files available everywhere. [131064830140] |You can create symbolic links on each distributions home folder to the shared one to achieve this. [131064830150] |This would be Ubuntu: [131064830160] |And openSUSE: [131064830170] |And so on.. [131064830180] |If you're not sure about interfering configuration files, try the second, safer way and find out which home components can be shared easily between the installed distributions. [131064840010] |Bash: Get Distribution Name and Version Number [131064840020] |I'm working on a simple bash script that should be able to run on Ubuntu and CentOS distributions (support for Debian and Fedora/RHEL would be a plus) and I need to know the name and version of the distribution the script is running (in order to trigger specific actions, for instance the creation of repositories). [131064840030] |So far what I've got is this: [131064840040] |This seems to work, returning ubuntu or centos (I haven't tried others) as the release name however, I have a feeling that there must be an easier, more reliable way of finding this out... [131064840050] |No? [131064850010] |Most recent distributions have a tool called lsb_release. [131064850020] |Your /etc/*-release will be using /etc/lsb-release anyway, so if that file is there, running lsb_release should work too. [131064850030] |I think uname to get ARCH is still the best way. [131064850040] |e.g. [131064850050] |Or you could just source /etc/lsb-release: [131064850060] |If you have to be compatible with older distributions, there is no single file you can rely on. [131064850070] |Either fall back to the output from uname, e.g. [131064850080] |or handle each distribution separately: [131064850090] |Of course, you can combine all this: [131064850100] |Finally, your ARCH obviously only handles Intel systems. [131064850110] |I'd either call it BITS like this: [131064850120] |Or change ARCH to be the more common, yet unambiguous versions: x86 and x64 or similar: [131064850130] |but of course that's up to you. [131064860010] |If you can't or don't want to use the LSB release file (due to the dependencies the package brings in), you can look for the distro-specific release files. [131064860020] |Bcfg2 has a probe for the distro you might be able to use: http://trac.mcs.anl.gov/projects/bcfg2/browser/doc/server/plugins/probes/group.txt [131064870010] |If the file /etc/debian_version, it is Debian, or a Debian derivative. [131064870020] |This file may have a release number; on my machine it is currently 6.0.1. 
[131064870030] |If it is testing or unstable, it may say testing/unstable, or it may have the number of the upcoming release. [131064870040] |My impression is that on Ubuntu at least, this file is always testing/unstable, and that they don't put the release number in it, but someone can correct me if I am wrong. [131064870050] |Fedora (recent releases at least), have a similar file, namely /etc/fedora-release. [131064880010] |Type below command [131064880020] |cat /etc/issue [131064890010] |lsb_release -a. [131064890020] |Works on Debian and I guess Ubuntu, but I'm not sure about the rest. [131064890030] |Normally it should exist in all GNU/Linux distributions since it is LSB (Linux Standard Base) related. [131064900010] |I'd go with this as a first step: [131064900020] |Gentoo, RedHat, Arch &SuSE have a file called e.g. /etc/gentoo-release. [131064900030] |Seems to be popular, check this site about release-files. [131064900040] |Debian &Ubuntu should have a /etc/lsb-release which contains release info also, and will show up with the previous command. [131064900050] |Another quick one is uname -rv. [131064900060] |If the kernel installed is the stock distro kernel, you'll usually sometimes find the name in there. [131064910010] |In order of most probable success, these: [131064910020] |cover most cases (AFAIK): Debian, Ubuntu, Slackware, Suse, Redhat, Gentoo, *BSD and perhaps others. [131064920010] |This is a duplicate of Bash: Get Distribution Name and Version Number [131064920020] |In brief, most of the time lsb_release -a or lsb_release -si will work. [131064920030] |Or you can use a script like this to handle the case where lsb_release is not available. [131064920040] |If you're typing it interactively, and so prefer something easy to type, and don't care what the output is, you can just do. [131064930010] |How do I make multiple directories at once in a directory ? [131064930020] |I know with mkdir I can do mkdir A B C D E F to create each directory. [131064930030] |How do I create directories A-Z or 1-100 with out typing in each letter or number? [131064940010] |It's probably easiest to just use a for loop: [131064940020] |You need at least bash 3.0 though; otherwise you have to use something like seq [131064950010] |On Linux you can generate sequences of digits with the "seq" command, but this doesn't exist on all Unix systems. [131064950020] |For example to generate directories from 1 to 100: [131064950030] |While you can certainly make directories A to Z with shell utils: [131064950040] |It's probably a lot less ugly to just use Perl: [131064960010] |The {} syntax is Bash syntax not tied to the for construct. [131064960020] |is sufficient all by itself. [131064960030] |http://www.gnu.org/software/bash/manual/bashref.html#Brace-Expansion [131064970010] |You can also do more complex combinations (try these with echo instead of mkdir so there's no cleanup afterwards): [131064970020] |Compare [131064970030] |to [131064970040] |If you have Bash 4, try [131064970050] |and [131064980010] |Debian 6.0 RC1: Cannot login as root and user is not a sudoer [131064980020] |After installing Debian 6.0 rc1, I can't do anything as administrator because I cannot login as root (but I did setup the password during installation) and user is not a sudoer either. [131064980030] |Did I miss anything? [131064980040] |[update] I used Live CD to boot and edit the sudoer file, and the problem is fixed. [131064990010] |How are you trying to login? [131064990020] |From the terminal, ssh, X (GDM/KDM/XDM/etc)? 
[131064990030] |Have you tried su-ing to root from the user account? [131064990040] |Having set a root password but not being able to log in to the terminal as root or su to root would indicate a problem. [131064990050] |Not being able to log in to X or via ssh as root would more likely be the result of good default security restrictions. [131064990060] |If su works but you still want sudo, then you can just run su -c visudo and add your user account to the sudoers file. [131065000010] |How to list currently not installed packages? [131065000020] |Hi, I'd like to output a list of all currently not installed packages (they are visible in Synaptic for example) using only shell commands. [131065000030] |How do I do this? [131065000040] |Thanks! [131065010010] |See the Search Term Reference in the aptitude user's manual for details. [131065020010] |From the question's tags, I assume a Debian system. [131065020020] |In Bash: aptitude search '!~i'. [131065020030] |The list is very long (more than 30k lines). [131065020040] |It can be interesting to suppress virtual packages also: aptitude search '!~i !~v' [131065030010] |Type the command as follows: [131065030020] |with this command you can check your installed RPM list; after that you can easily find which RPM is not installed. [131065040010] |How to run e2image to back up a remote CentOS server? [131065040020] |Our server is hosted out of the country, and I just have SSH access. [131065040030] |I want to take an image backup of my server. [131065040040] |I found the e2image utility to take an online backup of that server, but when I run e2image -r /dev/sda1 sda1image, the command does not run; instead it shows this error: [131065040050] |Could anyone help me with how I can back up my whole server? [131065050010] |Backup script permission issue [131065050020] |I have a server with a user named deploy-user and have written a backup script to back up a number of websites associated with this user. [131065050030] |However, one of the sites I'm trying to back up has a directory, /home/usera/web/www.example.com/some/random_dir, that is owned by apache-data-user. [131065050040] |What permissions would I give deploy-user to be able to back up that directory? [131065050050] |Options I am aware of are either: [131065050060] |
  • Running the script as root, which I don't really want to do.
  • [131065050070] |
  • Adding apache-data-user and deploy-user to the same group. [131065050080] |But then apache-data-user will have too many permissions.
  • [131065050090] |Has anyone got a suggestion of the best way to backup this directory? [131065060010] |If you don't want to put deploy-user and apache-data-user into the same group then ACL suits you best. [131065070010] |Option 2 is definitely the way to go, unless you want to use ACL. [131065070020] |Note that such a group will probably only need read permissions to the directory you're referring to. [131065070030] |Another option would be to use sudo to give the deploy-user some very restricted rights to only perform backup operations as root. [131065080010] |find default apache user's group in /etc/groups [131065080020] |and add deploy-user to that group [131065090010] |If you have access control lists enabled, give deploy-user the right to read /home/usera/web/www.example.com/some/random_dir and its contents. [131065090020] |To enable ACLs, you may need to add the acl option to the entry for the filesystem in /etc/fstab and install your distribution's acl package. [131065090030] |Under Linux, the following commands give deploy-user the right to read and traverse the whole hierarchy rooted at /home/usera/web: [131065100010] |How can I assign a keyboard shortcut to a specific application in Openbox? [131065100020] |I use the web browser Uzbl and the window manager Openbox, and I wondered if I could configure openbox to add a keyboard shortcut to minimize / maximize Uzbl's window ... [131065110010] |Alt+Space, x is the default shortcut for maximize/unmaximize in most window managers. [131065110020] |Does that work? [131065110030] |Or maybe it's Alt+F6 and Alt+F7 as suggested in the Actions Documentation. [131065110040] |If not, you can add a binding using the information in the Openbox Bindings Documentation, but it sounds like you can only set shortcuts for all windows, not just for one program. [131065110050] |In brief, you find your rc.xml file, then add something like this in the middle of it: [131065110060] |Unless you meant unminimize / restore rather than maximize, i.e. a binding that works even when the window isn't focussed. [131065110070] |In that case, I'd suggest using xbindkeys and wmctrl. [131065110080] |You'd have to write a script that runs wmctrl to find the uzbl window using wmctrl -l, then run either wmctrl -a or wmctrl -R , then add an entry in .xbindkeysrc to run that script whenever a specific keyboard combination was pressed. [131065120010] |In that case, I'd suggest using xbindkeys and wmctrl. [131065120020] |I've googled for "wmctrl examples" and i found that: http://spiralofhope.com/wmctrl-examples.html#s12 [131065120030] |So i added th following lines to my openbox configuration: [131065120040] |Its works exactly as i wanted, tkanks ! [131065130010] |How to add write permissions for a group? [131065130020] |I changed permissions of a file (chmod g+w testfile) and running ls -l testfile gives: [131065130030] |I then added a user to that group ("/etc/group" has user1:x:1000:user2 line), but am failing to edit that file as user2. [131065130040] |Why is this so? [131065140010] |You might need to have user2 log out and back in (or just try ssh'ing in to create a new login session). [131065140020] |Check the output of id --groups to show the numeric group ids for a user. [131065150010] |Parse html on Linux [131065150020] |I have a couple of hundred html source code files. [131065150030] |I need to extract the contents of a particular
element from each of these files, so I'm going to write a script to loop through each file. [131065150040] |The element structure is like this: [131065150050] |Can anyone suggest a method by which I can extract the div the_div_id and all the child elements and content from a file using the Linux command line? [131065160010] |The html-xml-utils package, available in most major Linux distributions, has a number of tools that are useful when dealing with HTML and XML documents. [131065160020] |Particularly useful for your case is hxselect, which reads from standard input and extracts elements based on CSS selectors. [131065160030] |Your use case would look like: [131065160040] |You might get a complaint about input not being well formed depending on what you are feeding it. [131065160050] |This complaint is given over standard error and thus can be easily suppressed if needed. [131065160060] |An alternative would be to use Perl's HTML::Parser module; however, I will leave that to someone with Perl skills less rusty than my own. [131065170010] |Here's an untested Perl script that extracts
elements and their contents using HTML::TreeBuilder. [131065170020] |If you're allergic to Perl, Python has HTMLParser. [131065170030] |P.S. [131065170040] |Do not try using regular expressions. [131065180010] |How do you move all files (including hidden) in a directory to another? [131065180020] |Possible Duplicate: How do you move all files (including hidden) in a directory to another? [131065180030] |How do I move all files in a directory (including the hidden ones) to another directory? [131065180040] |For example, if I have a folder "Foo" with the files ".hidden" and "notHidden" inside, how do I move both files to a directory named "Bar"? [131065180050] |The following does not work, as the ".hidden" file stays in "Foo". [131065180060] |Note: Try it yourself. [131065190010] |How do you move all files (including hidden) in a directory to another? [131065190020] |How do I move all files in a directory (including the hidden ones) to another directory? [131065190030] |For example, if I have a folder "Foo" with the files ".hidden" and "notHidden" inside, how do I move both files to a directory named "Bar"? [131065190040] |The following does not work, as the ".hidden" file stays in "Foo". [131065190050] |Note: Try it yourself. [131065200010] |Linux: How to move all files from current directory to upper directory? [131065210010] |

    From man bash

    [131065210020] |dotglob If set, bash includes filenames beginning with a '.' in the results of pathname expansion. [131065220010] |One way is to use find: [131065220020] |The -type f restricts the find command to finding files. [131065220030] |You should investigate the -type, -maxdepth, and -mindepth options of find to customize your command to account for subdirectories. [131065220040] |Find has a lengthy but very helpful manual page. [131065230010] |
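    The find command itself did not survive in this copy; a minimal sketch of the kind of invocation the answer describes (Foo and Bar are the directories from the question):

        # move everything one level deep, including dot files;
        # -mindepth 1 skips Foo itself, -maxdepth 1 avoids recursing,
        # and -type f limits the move to regular files, as the answer explains
        find Foo/ -mindepth 1 -maxdepth 1 -type f -exec mv {} Bar/ \;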

    Zsh

    [131065230020] |or [131065230030] |(Leave out the (N) if you know the directory is not empty.) [131065230040] |
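    The zsh commands were stripped from this copy; plausible reconstructions of the two variants (the D glob qualifier includes dot files, and N makes an empty directory expand to nothing instead of raising an error):

        mv Foo/*(DN) Bar/

    or

        setopt glob_dots
        mv Foo/*(N) Bar/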

    Bash

    [131065230050] |
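    The bash snippet is missing here; a minimal sketch of what it likely showed, using the dotglob option quoted from man bash above:

        shopt -s dotglob    # make * match names starting with a dot
        mv -- Foo/* Bar/
        shopt -u dotglob    # restore the default afterwards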

    Ksh93

    [131065230060] |If you know the directory is not empty: [131065230070] |
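    The ksh93 code is also missing from this copy; a sketch under the assumption that it relied on FIGNORE, which controls which names globs skip (this pattern excludes only . and .., so * then matches dot files too):

        FIGNORE='.?(.)' mv Foo/* Bar/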

    Standard (POSIX) sh

    [131065230080] |If you're willing to let the mv command return an error status even though it succeeded, it's a lot simpler: [131065230090] |
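    A reconstruction of the kind of command meant here. The three patterns cover ordinary names, dot files other than . and .., and names starting with .. followed by something; any pattern that matches nothing is passed to mv literally, which is why mv can return an error status even when everything else moved successfully:

        mv -- Foo/* Foo/.[!.]* Foo/..?* Bar/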

    GNU find and GNU mv

    [131065230100] |
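    The command itself was lost here; a likely sketch using GNU mv's -t (target-directory) option together with GNU find:

        find Foo/ -mindepth 1 -maxdepth 1 -exec mv -t Bar/ -- {} +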

    Standard find

    [131065230110] |If you don't mind changing to the source directory: [131065240010] |How to remove an audio track from an mp4 video file? [131065240020] |I have an mp4 video file with multiple audio tracks. [131065240030] |I would like to strip away the rest of the tracks and keep just one. [131065240040] |How do I do this? [131065250010] |FFmpeg might be a helpful solution for you. [131065260010] |http://howto-pages.org/ffmpeg/#strip [131065260020] |Please see the example. [131065270010] |You could try avidemux, which handles only one audio file internally (and therefore forces you to select one specific track). [131065270020] |I've never tried it with .mp4 but it works with other formats. [131065280010] |First run ffmpeg -i file.mp4 to see which streams exist in your file. [131065280020] |You should see something like this: [131065280030] |Then run ffmpeg -i file.mp4 -map 0:0 -map 0:2 -acodec copy -vcodec copy new_file.mp4 to copy the video stream and the 2nd audio stream to new_file.mp4 (the -map and codec options are output options, so they go after the input file). [131065290010] |How does the Linux kernel compare to microkernel architectures? [131065290020] |I read once that one advantage of a microkernel architecture is that you can stop/start essential services like networking and filesystems, without needing to restart the whole system. [131065290030] |But considering that the Linux kernel nowadays (was it always the case?) offers the option to use modules to achieve the same effect, what are the (remaining) advantages of a microkernel? [131065300010] |A microkernel limits the time the system is in kernel mode, as opposed to userspace, to the absolute minimum possible. [131065300020] |If a crash happens in kernel mode, the entire kernel goes down, and that means the entire system goes down. [131065300030] |If a crash happens in user mode, just that process goes down. [131065300040] |Linux is robust in this regard, but it's still possible for any kernel subsystem to write over the memory of any other kernel subsystem, either purposefully or accidentally. [131065300050] |The microkernel concept puts a lot of stuff that is traditionally kernel mode, such as networking and device drivers, in userspace. [131065300060] |Since the microkernel isn't really responsible for a lot, that also means it can be simpler and more reliable. [131065300070] |Think of the way the IP protocol, by being simple and stupid, really leads to robust networks by pushing complexity to the edges and leaving the core lean and mean. [131065310010] |Microkernels require less code to be run in the innermost, most trusted mode than monolithic kernels. [131065310020] |This has many aspects, such as: [131065310030] |
  • Microkernels allow non-fundamental features (such as drivers for hardware that is not connected or not in use) to be loaded and unloaded at will. [131065310040] |This is mostly achievable on Linux, through modules.
  • [131065310050] |Microkernels are more robust: if a non-kernel component crashes, it won't take the whole system with it. [131065310060] |A buggy filesystem or device driver can crash a Linux system. [131065310070] |Linux doesn't have any way to mitigate these problems other than coding practices and testing.
  • [131065310080] |Microkernels have a smaller trusted computing base. [131065310090] |So even a malicious device driver or filesystem cannot take control of the whole system (for example, a driver of dubious origin for your latest USB gadget wouldn't be able to read your hard disk).
  • [131065310100] |A consequence of the previous point is that ordinary users can load their own components that would be kernel components in a monolithic kernel.
[131065310110] |Unix GUIs are provided via X window, which is userland code (except for (part of) the video device driver). [131065310120] |Many modern unices allow ordinary users to load filesystem drivers through FUSE. [131065310130] |Some of the Linux network packet filtering can be done in userland. [131065310140] |However, device drivers, schedulers, memory managers, and most networking protocols are still kernel-only. [131065310150] |A classic (if dated) read about Linux and microkernels is the Tanenbaum–Torvalds debate. [131065310160] |Twenty years later, one could say that Linux is very, very slowly moving towards a microkernel structure (loadable modules appeared early on, FUSE is more recent), but there is still a long way to go. [131065310170] |Another thing that has changed is the increased relevance of virtualization on desktop and high-end embedded computers: for some purposes, the relevant distinction is not between the kernel and userland but between the hypervisor and the guest OSes. [131065320010] |The fact is that the Linux kernel is a hybrid of monolithic and microkernel designs. [131065320020] |In a pure monolithic implementation, there is no module loading at runtime. [131065330010] |Just take a look at the x86 architecture -- a monolithic kernel only uses rings 0 and 3. [131065330020] |A waste, really. [131065330030] |But then again, it can be faster, because of less context switching. [131065340010] |You should read the other side of the issue: [131065340020] |Extreme High Performance Computing or Why Microkernels suck [131065340030] |The File System Belongs In The Kernel
  • I ran lsof | grep audio and lsof | grep delete to see if there's any process locking the audio path(?), but nothing looks suspect.
  • [131065350070] |VLC and MPlayer are affected, while Quod Libet (GStreamer) isn't.
[131065350080] |[update] Strange one. [131065350090] |I don't know if it has anything to do with Quod Libet, but I noticed that after closing (and reopening) it, the problem seemed to disappear. [131065350100] |Note that I haven't logged out yet. [131065360010] |You might try logging out of your desktop and logging back in. [131065360020] |Sometimes this is enough to kill any locks, delete tmp files, and reset any other configuration gizmos that might have gotten left behind by a misbehaving application. [131065360030] |You might also try poking through the Sound Preferences configuration for the hardware selections and make sure that your selected output hardware looks correct. [131065360040] |Knowing which distro you're using might help in getting more suggestions. [131065370010] |It might be because of PulseAudio. [131065370020] |Try killing it and rerunning the application. [131065380010] |Any way to sync directory structure when the files are already on both sides? [131065380020] |I have two drives with the same files, but the directory structure is totally different. [131065380030] |Is there any way to 'move' all the files on the destination side so that they match the structure of the source side? [131065380040] |With a script perhaps? [131065380050] |For example, drive A has: [131065380060] |Whereas drive B has: [131065380070] |The files in question are huge (800GB), so I don't want to re-copy them; I just want to sync the structure by creating the necessary directories and moving the files. [131065380080] |I was thinking of a recursive script that would find each source file on the destination, then move it to a matching directory, creating it if necessary. [131065380090] |But that's beyond my abilities ... [131065380100] |Any help greatly appreciated! [131065380110] |Thanks [131065380120] |UPDATE: Another elegant solution was given here: http://superuser.com/questions/237387/any-way-to-sync-directory-structure-when-the-files-are-already-on-both-sides/238086 [131065390010] |How about something like this: [131065390020] |This assumes that the names of the files you want to sync are unique across the whole drive: otherwise there's no way it can be fully automated (however, you can provide a prompt for the user to choose which file to pick if there's more than one). [131065390030] |The script above will work in simple cases, but may fail if a name happens to contain symbols that have special meaning in regexps. [131065390040] |The grep on the list of files can also take a lot of time if there are lots of files. [131065390050] |You may consider translating this code to use a hashtable that maps filenames to paths, e.g. in Ruby. [131065400010] |I'll go with Gilles and point you to Unison as suggested by hasen j. Unison was DropBox 20 years before DropBox. [131065400020] |Rock solid code that a lot of people (myself included) use every day -- very worthwhile to learn. [131065400030] |Still, join needs all the publicity it can get :) [131065400040] |This is only half an answer, but I have to get back to work :) [131065400050] |Basically, I wanted to demonstrate the little-known join utility, which does just that: joins two tables on some field. [131065400060] |First, set up a test case including file names with spaces: [131065400070] |(edit some directory and/or file names in new). [131065400080] |Now, we want to build a map: hash -> filename for each directory and then use join to match up files with the same hash.
[131065400090] |To generate the map, put the following in makemap.sh: [131065400100] |makemap.sh spits out a file with lines of the form, 'hash "filename"', so we just join on the first column: [131065400110] |This generates moves.txt, which looks like this: [131065400120] |The next step would be to actually do the moves, but my attempts got stuck on quoting... mv -i and mkdir -p should come in handy. [131065410010] |Here's my attempt at an answer. [131065410020] |As a forewarning, all my scripting experience comes from bash, so if you are using a different shell, the command names or syntax may be different. [131065410030] |This solution requires creating two separate scripts. [131065410040] |The first script is responsible for actually moving the files on the destination drive. [131065410050] |The second script creates the md5 map file used by the first script and then calls the first script on every file in the destination drive. [131065410060] |Basically, what is going on is the two scripts simulate an associative array with $md5_map_file. [131065410070] |First, all the md5s for the files on the source drive are computed and stored. [131065410080] |Associated with the md5s are the relative paths from the drive's root. [131065410090] |Then, for each file on the destination drive, the md5 is computed. [131065410100] |Using this md5, the path of that file on the source drive is looked up. [131065410110] |The file on the destination drive is then moved to match the path of the file on the source drive. [131065410120] |There are a couple of caveats with this script: [131065410130] |
  • It assumes that every file in $dst is also in $src
  • [131065410140] |It does not remove any directories from $dst, only moves the files. [131065410150] |I am currently unable to think of a safe way to do this automatically.
[131065410160] |Good luck and I hope this helped. [131065420010] |Use Unison as suggested by hasen j. [131065420020] |I'm leaving this answer up as a potentially useful scripting example or for use on a server with only basic utilities installed. [131065420030] |I'll assume that the file names are unique throughout the hierarchy. [131065420040] |I'll also assume that no file name contains a newline, and that the directory trees only contain directories and regular files.
  • First collect the file names on the source side. [131065420060] |
  • Then move the files into place on the destination side. [131065420070] |First, create a flattened tree of files on the destination side. [131065420080] |Use ln instead of mv if you want to keep hard links around in the old hierarchy. [131065420090] |
  • If some files may be missing in the destination, create a similarly flattened /A.staging and use rsync to copy the data from the source to the destination. [131065420100] |
  • Now rename the files into place. [131065420110] |Equivalently: [131065420120] |
  • Finally, if you care about the metadata of the directories, call rsync with the files already in place. [131065420130] |Note that I haven't tested the snippets in this post. [131065420140] |Use at your own risk. [131065420150] |Please report any error in a comment. [131065430010] |There's a utility called unison: [131065430020] |http://www.cis.upenn.edu/~bcpierce/unison/ [131065430030] |Description from the site: [131065430040] |Unison is a file-synchronization tool for Unix and Windows. [131065430050] |It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other. [131065440010] |ls, regexp and environment variable [131065440020] |Hello, I wanted to declare an environment variable that stores all the extensions of video files so I can use it when using the shell. [131065440030] |I tried several things but never got it to work: If in my .bash_profile I put: [131065440040] |it only takes the last element: [131065440050] |If in my .bash_profile I put: [131065440060] |or [131065440070] |Then, when I display it it looks OK, but it doesn't work when I use it in an ls, for example: [131065440080] |And when I run the exact same command without using the variable, it works: [131065440090] |Also, when I reboot, it looks like my .bash_profile is not loading, and the $VIDEOS variable is empty. [131065440100] |I have to do a source ~/.bash_profile in order to get it to work (and I have to do so every time I open a new terminal). [131065440110] |Any idea? [131065440120] |Thanks! [131065450010] |Your command is being expanded to this: [131065450020] |Run this to see what's happening: [131065450030] |(it's called brace expansion) [131065450040] |The second problem is that bash does brace expansion before parameter expansion, not after it, so anything that looks like your solution will be messy. [131065450050] |You would have to do something like this: [131065450060] |which will get annoying to type every time. [131065450070] |How about something like this: [131065450080] |Then instead of doing: [131065450090] |just do this: [131065450100] |or if you need to pass it to a command: [131065450110] |This part working: [131065450120] |could be the clue to .bash_profile not working. [131065450130] |For example, it might mean you are using zsh. [131065450140] |Please tell us what this does: [131065450150] |so we can figure out which file you have to put it in. [131065460010] |You could probably use arrays for this. [131065460020] |Your syntax will have to be a bit different: [131065460030] |and then do [131065460040] |If you only want to list videos with names starting with foo, you would do [131065470010] |How do font managers work in Fedora? [131065470020] |I am looking to do some font management on my Fedora system. [131065470030] |I have installed both Font Manager and Fontmatrix. [131065470040] |My goal was to be able to have more fonts installed than I wanted displayed, and to be able to enable/disable fonts (or groups thereof) whenever I wanted to. [131065470050] |Both programs claim to enable or disable fonts. [131065470060] |I can't seem to get Fontmatrix to do anything other than be a comprehensive font information source (glyphs, etc.). The enable/disable doesn't appear to work, and the documentation is less than helpful. [131065470070] |I am able to disable/enable fonts in Font Manager.
[131065470080] |I had to recreate my Gnome settings, though, because I accidentally disabled all fonts, and even re-enabling them did not fix my panel fonts. [131065470090] |There wasn't anything I could do, short of removing my local configuration and logging out/in, that would get those fonts back. [131065470100] |So. [131065470110] |What exactly do these programs do when they disable a font? [131065470120] |And what trashed my panel fonts? [131065470130] |I know Monospace was still installed/enabled, and nothing I could do would change the panel information. [131065470140] |Thanks in advance! [131065480010] |How to install Debian from USB? (Using full-size image, not netinstall) [131065480020] |I learned of a way to install Debian from USB using the netinstall image, which is fine. [131065480030] |However, it means I have to spend hours and hours downloading packages to do the install. [131065480040] |Is there a way I can simply download (for example) the CD1 with most of GNOME and then use that? [131065480050] |The netinstall method using this does not work because there is not enough space. [131065480060] |(I have enough space; it is the method that has a limitation.) [131065480070] |I rarely have CDs on hand and some machines do not have CD/DVD drives anyway. [131065480080] |I will research this topic and answer my own question if need be; however, any help in the meantime is appreciated. [131065490010] |How about downloading the CD1 ISO, then putting it on a USB drive and booting? [131065490020] |(My favourite) [131065490030] |How about using an automated tool such as UNetbootin? [131065490040] |Here is another tool from Pendrivelinux. [131065500010] |I had problems with the netinstall stable 64. [131065500020] |I eventually overcame this: I found my binaries of nm and nm-applet and added the following to the top of the files with nano: #!/bin/busybox. I then cat'ed them onto the ubninit that UNetbootin puts onto the USB drive, like so: cat /usr/bin/nm >>/media/sdc1/ubninit and: cat /usr/bin/nm-applet >>/media/sdc1/ubninit. If you try this and it doesn't work, no big loss; just remember to delete the line you added to nm and nm-applet. [131065510010] |As of Debian 6.0 (Squeeze), the netinstall and disc 1 of the regular install CD/DVDs are 'hybrid' ISOs. [131065510020] |They can be burned to an optical disc and booted, or copied onto a USB drive and booted. [131065510030] |To copy the ISO onto a USB drive from a Linux system, all you need to do is cat the ISO onto the drive. [131065510040] |http://www.debian.org/releases/squeeze/i386/ch04s03.html.en#usb-copy-isohybrid [131065520010] |Creating a completely offline installer using simple-cdd [131065520020] |I'm trying to create a custom CD using my own minimal Debian installed setup. [131065520030] |I just simply typed build-simple-cdd. [131065520040] |The process was successful, but when I tested it on my VirtualBox with the network disabled, it looks for the kernel. [131065520050] |Did I miss anything in my setup? [131065530010] |My guess would be that your image doesn't support the architecture of your VM (e.g., a 64-bit ISO but your VM is specified as 32-bit). [131065530020] |Mount the CD and check the contents manually to make sure that what you expect to be there is actually there. [131065540010] |Which desktop indexer indexes XML file content? [131065540020] |My favorite indexer doesn't have this capability, and I'm looking for an alternative that does. [131065550010] |How to redirect stderr,out to different files and also display in terminal?
[131065550020] |To make it clear, I want to see the output of a command in the terminal as if there were no redirection. [131065550030] |Also, stderr needs to be redirected to err.log and stdout needs to be redirected to stdout.log. [131065550040] |It would also be nice to have an exact copy of what is shown in the terminal, i.e. errors printed as and when they occur, in a separate file stdouterr.log. [131065560010] |Use the tee command as follows: [131065560020] |3>&1 1>&2 2>&3 is how you swap stderr and stdout, because tee can only accept stdout. [131065560030] |Take a look at Unix tee command for more advanced redirections using tee. [131065570010] |@dogbane, Thanks. [131065570020] |I found another way too, which saves both streams approximately in the order in which they would get printed without redirection. [131065570030] |command 2> >(tee errlog | tee -a bothLog >/dev/tty ) | tee outlog | tee -a bothLog [131065570040] |But this works only with shells that support process substitution. [131065580010] |How to "jail" a process without being root? [131065580020] |Were I root, I could simply create a dummy user/group, set file permissions accordingly and execute the process as that user. [131065580030] |However I am not, so is there any way to achieve this without being root? [131065590010] |I guess you can have some luck with LD_PRELOAD to intercept access to certain files, but this might be really tricky. [131065600010] |This is a fundamental limitation of the Unix permission model: only root can delegate. [131065600020] |You don't need to be root to run a virtual machine (not true of all VM technologies), but this is a heavyweight solution. [131065600030] |User-mode Linux is a relatively lightweight Linux-on-Linux virtualization solution. [131065600040] |It's not that easy to set up; you'll need to populate a root partition (in a directory) with at least the minimum needed to boot (a few files in /etc, /sbin/init and its dependencies, a login program, a shell and utilities). [131065610010] |One known way to achieve isolation is through the seccomp sandboxing approach used in Google Chromium. [131065610020] |But this approach supposes that you write a helper which would process some (the allowed ones) of the "intercepted" file accesses and other syscalls; and also, of course, make an effort to "intercept" the syscalls and redirect them to the helper (perhaps it would even mean such a thing as replacing the intercepted syscalls in the code of the controlled process; so, it doesn't sound quite simple; if you are interested, you'd better read the details rather than just my answer). [131065610030] |More related info (from Wikipedia): [131065610040] |
  • http://en.wikipedia.org/wiki/Seccomp
  • [131065610050] |http://code.google.com/p/seccompsandbox/wiki/overview
  • [131065610060] |LWN article: Google's Chromium sandbox, Jake Edge, August 2009
  • [131065610070] |seccomp-nurse, a sandboxing framework based on seccomp.
[131065610080] |(The last item seems to be interesting if one is looking for a general seccomp-based solution outside of Chromium. [131065610090] |There is also a blog post worth reading from the author of "seccomp-nurse": SECCOMP as a Sandboxing solution ?.) [131065610100] |The illustration of this approach from the "seccomp-nurse" project: [131065610110] |

    A "flexible" seccomp possible in the future of Linux?

    [131065610120] |In 2009 there also appeared suggestions to patch the Linux kernel to give the seccomp mode more flexibility--so that "many of the acrobatics that we currently need could be avoided". [131065610130] |("Acrobatics" refers to the complications of writing a helper that has to execute many possibly innocent syscalls on behalf of the jailed process and of substituting the possibly innocent syscalls in the jailed process.) [131065610140] |An LWN article wrote on this point: [131065610150] |One suggestion that came out was to add a new "mode" to seccomp. [131065610160] |The API was designed with the idea that different applications might have different security requirements; it includes a "mode" value which specifies the restrictions that should be put in place. [131065610170] |Only the original mode has ever been implemented, but others can certainly be added. [131065610180] |Creating a new mode which allowed the initiating process to specify which system calls would be allowed would make the facility more useful for situations like the Chrome sandbox. [131065610190] |Adam Langley (also of Google) has posted a patch which does just that. [131065610200] |The new "mode 2" implementation accepts a bitmask describing which system calls are accessible. [131065610210] |If one of those is prctl(), then the sandboxed code can further restrict its own system calls (but it cannot restore access to system calls which have been denied). [131065610220] |All told, it looks like a reasonable solution which could make life easier for sandbox developers. [131065610230] |That said, this code may never be merged because the discussion has since moved on to other possibilities. [131065610240] |This "flexible seccomp" would bring the possibilities of Linux closer to providing the desired feature in the OS, without the need to write such complicated helpers. [131065620010] |Another trustworthy isolation solution (besides a seccomp-based one) would be complete syscall interception through ptrace, as explained in the manpage for fakeroot-ng: [131065620020] |Unlike previous implementations, fakeroot-ng uses a technology that leaves the traced process no choice regarding whether it will use fakeroot-ng's "services" or not. [131065620030] |Compiling a program statically, directly calling the kernel and manipulating ones own address space are all techniques that can be trivially used to bypass LD_PRELOAD based control over a process, and do not apply to fakeroot-ng. [131065620040] |It is, theoretically, possible to mold fakeroot-ng in such a way as to have total control over the traced process. [131065620050] |While it is theoretically possible, it has not been done. [131065620060] |Fakeroot-ng does assume certain "nicely behaved" assumptions about the process being traced, and a process that breaks those assumptions may be able to, if not totally escape, then at least circumvent some of the "fake" environment imposed on it by fakeroot-ng. [131065620070] |As such, you are strongly warned against using fakeroot-ng as a security tool. [131065620080] |Bug reports that claim that a process can deliberately (as opposed to inadvertently) escape fakeroot-ng's control will either be closed as "not a bug" or marked as low priority. [131065620090] |It is possible that this policy be rethought in the future. [131065620100] |For the time being, however, you have been warned. [131065620110] |Still, as you can read it, fakeroot-ng itself is not designed for this purpose.
[131065620120] |(BTW, I wonder why they have chosen to use the seccomp-based approach for Chromium rather than a ptrace-based one...) [131065630010] |But of course, the desired "jail" guarantees are implementable by programming in user-space (without additional support for this feature from the OS; maybe that's why this feature hasn't been included in the first place in the design of OSes), with more or less complication. [131065630020] |The mentioned ptrace- or seccomp-based sandboxing can be seen as variants of implementing the guarantees by writing a sandbox-helper that would control your other processes, which would be treated as "black boxes", arbitrary Unix programs. [131065630030] |Another approach could be to use programming techniques that take care of the effects that must be disallowed. [131065630040] |(It must be you who writes the programs then; they are not black boxes anymore.) [131065630050] |To mention one, using a pure programming language (which would force you to program without side-effects) like Haskell will simply make all the effects of the program explicit, so the programmer can easily make sure there will be no disallowed effects. [131065630060] |I guess there are sandboxing facilities available for those programming in some other language, e.g., Java. [131065640010] |More similar Qs with more answers worth attention: [131065640020] |
  • http://stackoverflow.com/q/3859710/94687
  • [131065640030] |http://stackoverflow.com/q/4410447/94687
  • [131065640040] |http://stackoverflow.com/q/4249063/94687
  • [131065640050] |http://stackoverflow.com/q/1019707/94687
[131065640060] |Some of the answers there point to specific solutions not yet mentioned here. [131065640070] |In general, the answers there cover the full described range of possibilities and even more: [131065640080] |
  • (virtual machines/OS),
  • [131065640090] |(kernel extension (like SELinux; mentioned in comments here)),
  • [131065640100] |chroot-based helpers,
  • [131065640110] |ptrace-based (of the tools not mentioned here, I have noted Geordi for myself, because I liked that the controlling program is written in Haskell),
  • [131065640120] |(seccomp-based),
  • [131065640130] |and unshare-based solutions--not mentioned here--e.g., unsharing mount-points (combined with FUSE?) could perhaps be a part of a working solution for you wanting to confine filesystem accesses of your untrusted processes,
  • [131065640140] |and special user-space programming/compiling (NaCl--not mentioned here--belongs to this group, doesn't it?)
[131065640150] |Some pages accumulating info on this topic were also pointed at in the answers there: [131065640160] |
  • page on Google Chrome's sandboxing methods for Linux
  • [131065640170] |sandboxing.org group
[131065650010] |How to check if $PWD is a subdirectory of a given path [131065650020] |E.g. check if $PWD is a subdirectory of /home. [131065650030] |In other words, I'm searching for a bash string operation to check if one string starts with another. [131065660010] |I don't understand the question too well, but to find the parent of $PWD, do dirname $PWD. [131065660020] |To find the parent of the parent, run dirname $(dirname $PWD), and so on... [131065670010] |Crude version: [131065670020] |Has the disadvantage that one has to count characters first and one can't replace /home/ by something general like $1. [131065670030] |Edit (thanks @Michael): for the generalization to compare with $VAR, one can use [131065680010] |How about this: [131065680020] |If $PWD starts with "/home/", it gets stripped off on the left side, which means it won't match the right side, so "!=" returns true. [131065690010] |Hm, it's a pity that [ doesn't have an option for testing a "STRING1 starts with STRING2" condition. [131065690020] |You may try echo $PWD | grep '^$VAR', but it can fail in interesting ways when VAR contains special symbols. [131065690030] |awk's index function should be able to do the trick. [131065690040] |But all this seems just too heavy for such an easy thing to test. [131065700010] |Using awk: [131065710010] |If the searched part of the path is found, I "empty" the variable: [131065720010] |To test if a string is a prefix of another, in any Bourne-style shell: [131065720020] |The same principle works for a suffix or substring test. [131065720030] |Note that in case constructs, unlike in file names, * matches any character, including a / or an initial .. [131065720040] |In shells that implement the [[ … ]] syntax (i.e. bash, ksh and zsh), it can be used to match a string against a pattern. [131065720050] |(Note that the [ command can only test strings for equality.) [131065720060] |If you're specifically testing whether the current directory is underneath /home, a simple substring test is not enough, because of symbolic links. [131065720070] |If /home is a filesystem of its own, test whether the current directory (.) is on that filesystem. [131065720080] |If you have the NetBSD, OpenBSD or GNU (i.e. Linux) readlink, you can use readlink -f to strip symbolic links from a path. [131065720090] |Otherwise, you can use pwd to show the current directory. [131065720100] |But you must take care not to use a shell built-in if your shell tracks cd commands and keeps the name you used to reach the directory rather than its “actual” location. [131065730010] |Flash running in Chromium and FF at once, why no sound in the second browser? [131065730020] |I work as a student assistant Linux admin and I just packaged up Adobe's "Square" plugin to get 64-bit FF running Flash pretty well (first time I've seen it work this well), but there's one little problem I've come across thus far: if you open one browser and start using Flash, the second browser will not be able to output sound. [131065730030] |I realize this is probably because of what sound driver is being used, but is there any good way to fix this, or is this just how it is for Flash, being the bane of my existence? [131065730040] |Thanks for any help! [131065740010] |Check to see if you can play sound in any second application. [131065740020] |Back when I used Linux on the desktop, some audio drivers couldn't mix two audio streams. [131065740030] |I really hope that would have been fixed by now, but you never know...
[131065740040] |If you really can't play two simultaneous audio sources, then you'll want to install an audio mixing daemon (e.g., esound or similar). [131065740050] |A mixing daemon will intercept audio signals, mix them itself, then send a single combined audio stream to the DSP. [131065740060] |But if you can play sound from a second audio source, then I'm completely wrong. [131065750010] |As described on the FedoraProject wiki on Flash, you might need the PulseAudio ALSA module. [131065750020] |If one of the browsers' Flash plugins (or PulseAudio itself) has locked the sound device, other apps trying to use the sound device might not succeed. [131065760010] |What would be a good choice for an elastic file system (for adding storage at a later date)? [131065760020] |I'm currently running a Debian 6.0 server with ext3, but I'd like to move over to Arch. [131065760030] |It's being used as a file server right now with a 1TB drive in it (of which 650GB is used). [131065760040] |What I'd like to do at a later point (when I'm not completely broke) is buy another drive and add it to the same system (for backing up my main rig). [131065760050] |What would be the easiest way of accomplishing this? [131065760060] |I've looked into RAID, but it'd be useless because I'd have to reinitialize the array every time I added a new drive. [131065760070] |Note: I'm not fussed about redundancy; it's only going to be hosting mirrored backups, which I can easily remake in the case of data loss. [131065760080] |Basically: given a system with a clean 1TB drive in it, what should I do now to prepare for a new striped drive at a later date without having to reinitialize any arrays? [131065770010] |Hey, I'm stupid! [131065770020] |I can just use LVM (which I forgot can do striping). [131065770030] |The rubber-duck debugging method comes to the rescue again. [131065780010] |If you really don't care about reliability, you can use LVM and keep adding physical volumes to a single volume group. [131065780020] |That is, you would have a single volume group acting as a virtual drive, made up of several physical volumes (the actual drives). [131065780030] |Instead of PC-style partitions, you'd create logical volumes for filesystems and swap. [131065780040] |LVM is a good idea anyway if you're planning to extend your storage or move stuff around. [131065780050] |It's a lot easier to resize an LVM volume or move it to a different drive than to do this for PC partitions, and all the LVM stuff can be done online (i.e. while running from the mounted volume). [131065780060] |Linux's RAID subsystem can grow RAID-5 and RAID-6 arrays (it's slow, but can be done online), but curiously not linear arrays, so you'd have to start with at least two disks. [131065780070] |You could also look into ZFS, a filesystem with built-in volume management. [131065780080] |I don't know what its capabilities for adding storage are.
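A minimal sketch of what adding the second drive could look like later under LVM (the device, volume group, and volume names here are only placeholders):

    pvcreate /dev/sdb1                    # prepare the new drive as an LVM physical volume
    vgextend vg0 /dev/sdb1                # add it to the existing volume group
    lvextend -l +100%FREE /dev/vg0/data   # grow the logical volume into the new space
    resize2fs /dev/vg0/data               # grow the ext3 filesystem, online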