[131082770010] |Incremental system backup and restore w/ rsync or rdiff-backup issues [131082770020] |I've been reading about using rsync and rdiff-backup to make incremental backups of my entire Linux system. [131082770030] |I am planning to practise that by setting up a Linux system, making some changes to it, erasing everything and restoring the system (all with the convenience of doing it in a virtual machine). [131082770040] |But I want to clarify a few things before taking the plunge: [131082770050] |
  • How do I get fine-grained control over what gets included in and excluded from the backup? i.e. by default, would everything (including invisible "." files, trash, etc.) be backed up? [131082770060] |If so, how can I exclude some of them?
  • [131082770070] |
  • The rdiff-backup tutorial tells you how to retrieve a file from an arbitrary time in the past (as long as you still have the backup). [131082770080] |What if I want to restore the complete system on a new computer? [131082770090] |How do I do that?
  • [131082770100] |
  • Suppose I make backups of my system on machine A. [131082770110] |A few years later I buy a new computer B, and want to restore my system from the backup of machine A. [131082770120] |Since the hardware might be totally different, will my new system work when I try to start it up after the restore? [131082770130] |How can I ensure a smooth restore onto a new machine?
  • [131082770140] |Thanks for your answers. [131082780010] |(If you have three questions, it's better to ask three separate questions. [131082780020] |But since the answers are short I'll answer them all.) [131082780030] |
  • Both rsync and rdiff-backup have fairly powerful file selection mechanisms, based on inclusion and exclusion rules. [131082780040] |I wrote an rsync filter tutorial. [131082780050] |Rdiff-backup's filters are based on the same principles but the details are different; see the “file selection” section in the manual.
  • [131082780060] |
  • Restore the whole backup (i.e. the root directory of the backup tree) as of now: rdiff-backup -r now /path/to/backup /where/to/restore.
  • [131082780070] |
  • There's nothing in the storage format that depends on your hardware. [131082780080] |You can save and restore on different machines; it doesn't matter.
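As a concrete sketch of such exclusion rules in action (the directory layout and the `.cache` pattern are made-up examples; this assumes rsync is installed):

```shell
# Sketch: back up a tree while excluding a hidden cache directory.
# The layout and the .cache pattern are made-up examples.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/docs" "$src/.cache"
echo notes > "$src/docs/notes.txt"
echo junk > "$src/.cache/blob"

# Copy everything except the .cache directory.
rsync -a --exclude='.cache/' "$src/" "$dst/"

ls -A "$dst"
```

rdiff-backup's --include/--exclude options follow the same idea, e.g. rdiff-backup --exclude '**/.cache' /home /path/to/backup.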
  • [131082790010] |Adobe Flash App based Linux Distro [131082790020] |What would be the best way to create a small Linux image that just runs a single Flash SWF on startup (full screen)? [131082790030] |Are there distros catering to something like this, or is my best bet manually slimming down an image and then adding something into one of the startup scripts? [131082790040] |Also, what's the best way to run an SWF outside of a browser? gnash? swfdec? other? [131082800010] |If you want something really (I mean really) tiny, check out Linux From Scratch. [131082800020] |I've built over 6 systems with it (custom servers), and it's amazingly small and fast (if you build it to be). [131082800030] |The downside is that it takes a really long time to build & compile, but if you're up for it, go ahead. [131082800040] |I warn you, though, that you have to configure everything in the system. [131082800050] |One build for me takes 8 hours (completely unattended), so go and get a coffee (or 4) while you wait. [131082810010] |Gentoo could be a good choice. [131082810020] |USE flags can be really helpful to cut out all the unnecessary stuff. [131082810030] |And once you've done it, it's easy to maintain and keep up to date. [131082810040] |Tiny Core is very small by default (~10 MB, Micro Core is ~6 MB). [131082810050] |I recommend taking a look at these too. [131082820010] |Are command line arguments 'safe' from other users? [131082820020] |If I invoke a command with arguments like so: [131082820030] |...can other users view the arguments passed to the command? [131082830010] |In general yes, they can see it. [131082830020] |This is from the w man page: [131082830030] |The following entries are displayed for each user: login name, the tty name, the remote host, login time, idle time, JCPU, PCPU, and the command line of their current process. [131082830040] |The complete command line of your currently running process will be displayed.
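This is easy to verify for yourself on Linux; here is a small sketch using a throwaway sleep process:

```shell
# Sketch: command line arguments are world-readable via /proc/<pid>/cmdline.
sleep 123 &
pid=$!
sleep 1   # give the child a moment to exec
# Arguments are NUL-separated; turn them into a readable line.
args=$(tr '\0' ' ' < "/proc/$pid/cmdline")
echo "$args"
kill "$pid"
```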
[131082830050] |That's why you do not want to supply things like passwords through command line arguments. [131082840010] |In general, command line arguments are visible to all. [131082840020] |For example, as a non-root user on OpenBSD, I can see arguments of processes running as root: [131082840030] |On Linux, you'll notice that all /proc/*/cmdline files are world-readable. [131082840040] |There may be highly specific settings in which command line arguments remain private. [131082840050] |For example, SELinux and Solaris can altogether hide processes from other users. [131082840060] |But unless you absolutely know you're in such a setting, assume command line arguments are public. [131082850010] |On default setups they are able to, but as has been written, a process can overwrite its own argument list. [131082850020] |Still, this can be subject to race condition/timing attacks. [131082850030] |However, the grsecurity patchset includes a patch which allows only the owner (and root) to see his/her processes. [131082860010] |What methods are used to encrypt passwords in /etc/passwd and /etc/shadow? [131082860020] |A careful examination of the /etc/passwd and /etc/shadow files reveals that the passwords stored are hashed using some form of hashing function. [131082860030] |A quick Google search reveals that by default, the passwords are encrypted using DES. [131082860040] |If an entry begins with $, then it indicates that some other hashing function was used. [131082860050] |For example, some entries on my Ubuntu machine begin with $6$... [131082860060] |What do the various numbers represent? [131082870010] |The full list is in man 3 crypt (web version): [131082870020] |(Blowfish can be either $2$ or $2a$ according to Wikipedia Crypt (Unix).) [131082870030] |So $6$ means SHA-512. [131082870040] |Which one your system uses is governed by any options passed to the pam_unix PAM module.
[131082870050] |The default on the latest version of Ubuntu is set in /etc/pam.d/common-password: [131082870060] |which means that next time you change your password, it will be hashed using SHA-512, assuming your account is local, rather than NIS/LDAP/Kerberos, etc. [131082870070] |See also: [131082870080] |
  • FreeBSD crypt
  • [131082870090] |
  • ArchLinux Blowfish passwords
  • [131082870100] |
  • NetBSD crypt(3)
  • [131082870110] |
  • w3schools crypt
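To see such a hash being produced, here is a sketch using OpenSSL's crypt support (the password and salt are made up; assumes OpenSSL 1.1.1 or later for the -6 option):

```shell
# Sketch: generate a SHA-512 crypt hash of the kind stored in /etc/shadow.
hash=$(openssl passwd -6 -salt examplesalt 'secret')
echo "$hash"
# The $6$ prefix names the scheme (SHA-512), followed by the salt and the hash.
case "$hash" in
  '$6$examplesalt$'*) echo "scheme: SHA-512" ;;
esac
```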
  • [131082880010] |Why is my fstab entry for an external USB drive not working? [131082880020] |I have an external USB drive which my system recognizes as /dev/sdb1. I want to have it automounted with 755 permissions on boot and shared over the network with samba. [131082880030] |I created the mount point /mnt/mybook for it, and I've mounted it manually with no problems. [131082880040] |If I do "mount /dev/sdb1 /mnt/mybook", it mounts correctly and I can access the contents. [131082880050] |I figured this would be simple enough, so I read up on fstab and came up with the following line for it: [131082880060] |UUID=C252-9CA3 /mnt/mybook vfat defaults,mode=755 0 0 [131082880070] |I got the UUID from blkid. [131082880080] |When I reboot, the drive is not automounted, much less with the 755 permissions I want. [131082880090] |How can I make it so the drive gets correctly automounted with the desired permissions? [131082890010] |You could try an alternate approach, which is to recognize your device at the udev level and use /dev/mybook-partition in /etc/fstab. [131082890020] |Put something like the following in /etc/udev/rules.d/dwilliams.rules: [131082890030] |The section on Auto mounting USB devices in the Arch wiki for udev might help you further. [131082900010] |How to get hashed password in /etc/shadow using getpwnam() ? [131082900020] |Apparently there is a function (getpwnam) that given a username, will return the appropriate entry in /etc/passwd with the other details for that user (shell, uid, gid, etc.). [131082900030] |However, I have no idea how to get that function working with the shadow password file (/etc/shadow). [131082900040] |Is this possible? [131082900050] |If it helps, the application will be running as root. [131082910010] |The whole point of the shadow password file is that getpwnam doesn't return passwords from it. [131082910020] |You need to look at man 3 shadow and getspnam in particular. 
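At the shell level the same split is visible with getent, which queries the same databases these C functions use (sketch; the shadow lookup yields nothing unless you are root):

```shell
# Sketch: getent queries the passwd and shadow databases.
entry=$(getent passwd root)
echo "$entry" | cut -d: -f1,7                  # login name and shell, readable by anyone
getent shadow root || echo "shadow entry denied (not root)"
```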
[131082920010] |That is not possible, for two reasons: [131082920020] |
  • The shadow file is only one method of authentication in modern systems. [131082920030] |Some of them do not involve a real password at all - what should be returned if the user is authenticated by a fingerprint? [131082920040] |Why should that break any application?
  • [131082920050] |
  • Giving the hashed string to non-root applications would enable off-line attacks.
  • [131082920060] |It is a system-specific file anyway, with a not very complicated structure, so if you need it you can write your own parser. [131082930010] |Vim - Get Current Directory [131082930020] |I'm currently adding a little bit of Git functionality to my menu.vim file, and for using a certain command (Gitk) I need to find out Vim's current directory. [131082930030] |How does one do that and include it in a command? (i.e. :!echo "%current-directory") [131082930040] |I'll admit here that I asked the wrong question - but I figured it out. [131082930050] |I'm currently using these in my menu.vim: [131082940010] |I think either :pwd or getcwd() is what you are looking for. [131082950010] |Need Advice: What Linux distro should I install on an old PowerPC Mac [131082950020] |I'm trying to set up my brother (who has a PPC Mac, with a 1 GHz processor and 256 MB of RAM) with a Linux distro that would allow him to surf the web on the device. [131082950030] |Support has faded for the new browsers, rendering the device essentially useless when it comes to the web. [131082950040] |Ideally I would have installed jolicloud, but alas, it is only Intel Mac compatible. [131082950050] |Which distros still continue to support PowerPC? [131082960010] |For 256MB of RAM you have to look at lightweight distros, or use a minimal install and build up as you need. [131082960020] |The PowerPC requirement may make it harder to find ready-made solutions, but if you are fine with a little work there are many options. [131082960030] |
  • Ubuntu has community support for PowerPC, but with that little memory you will have to use the minimal ISO, then install LXDE or XFCE.
  • [131082960040] |
  • Arch Linux has a port called PPC. [131082960050] |See the PowerPC installation guide.
  • [131082960060] |Remember, Linux is all about choice, so there are more distros that could be suitable for you. [131082960070] |I'm recommending these two because I'm more familiar with them. [131082970010] |I run Ubuntu 10.04 on an old 1.3 GHz PowerPC G4 iBook with 1 gig of RAM as a secondary laptop to mess around with. [131082970020] |It runs OK with GNOME. [131082970030] |You're going to be hurting with only 256 megs of RAM so if there's any way to upgrade, I'd suggest getting more RAM. [131082970040] |PenguinPPC has a list of distros with PowerPC support. [131082970050] |Ubuntu and Debian are the ones I've used. [131082970060] |Arch Linux and Gentoo support PowerPC but are compiled from source, which you'll probably want to stay away from on an older machine like yours. [131082970070] |Yellowdog is another one to consider; it is exclusively for PowerPC. [131082970080] |I personally like Ubuntu since it's a desktop-oriented distro, has stable release points (I run the LTS version), and offers a variety of desktop environments, which is useful for older machines like yours. [131082980010] |How can I disable the /proc pseudo-filesystem on a shared Linux environment? [131082980020] |Is it possible to disable the /proc pseudo-filesystem to prevent w, ps, top, etc, from showing what other users are doing? [131082980030] |Of course, I still want "everything else" to work correctly. [131082980040] |Is there a workable way to do this? [131082990010] |If you disable /proc, a lot of things will stop working. [131082990020] |Not only will you not be able to use ps and company, even to see your own processes, but a lot of tools and services will not be able to run. [131082990030] |Looking at the computer I'm writing this on, processes that have a file under /proc open include mdadm (RAID), Xorg (GUI), hald (hotpluggable devices), acpid (ACPI), rpc.mount (NFS server). [131082990040] |It might be workable to give /proc permissions 550 (i.e. 
non-world-readable), group proc-readers, and put all the services that use /proc but aren't running as root in the proc-readers group. [131082990050] |I've never tried; this is an experiment proposal, not something to do on a production machine. [131082990060] |Use SELinux or virtualization to further isolate users. [131083000010] |There is the grsecurity patchset for the Linux kernel (similar in spirit to SELinux but without the horribly complicated MAC permission system), which offers an option allowing only the owner (and root) to see his/her processes. [131083000020] |It offers other goodies without being as intrusive as SELinux. [131083000030] |A similar option exists on Solaris, or so I've heard. [131083010010] |How to make the PKG_CONFIG_PATH variable refer to an installed library? [131083010020] |I am in the process of installing the required libraries for Firefox 3.6 on the Red Hat Linux nash 4.x system. [131083010030] |I have already successfully installed the glib 2.12.0 library... but when I ./configure the atk 1.9.0 library I get the following error. [131083010040] |How can I add the path to the environment variable...? [131083010050] |Deeply appreciate any help. [131083010060] |Thanks :) [131083020010] |
  • If you can, install from the repository. [131083020020] |Check twice whether you already have it.
  • [131083020030] |
  • If you cannot, try the bundled tarball from the Firefox page.
  • [131083020040] |
  • Instead of installing all dependencies by hand, try installing them from the repository. [131083020050] |GLib is certainly in the Debian repo. [131083020060] |You need the -dev/-devel or similarly named packages
  • [131083020070] |
  • For this particular problem - you installed the packages into something called a prefix. [131083020080] |You can set this by ./configure --prefix=PREFIX and the default is /usr/local. [131083020090] |Hence you need to add PREFIX/lib/pkgconfig to PKG_CONFIG_PATH. [131083020100] |The exact method varies from shell to shell, but the simplest option (for the duration of a single session) is the command export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
  • [131083020110] |As a last piece of advice - DON'T install from source. [131083020120] |It is much more complicated than it looks and you will run into problems. [131083020130] |Look at the number of tools the Gentoo operating system has (revdep-rebuild, lafilefixer, etc.) to handle it. [131083020140] |You will be on your own, and firefox/xulrunner, which uses some parts in a non-standard way, will give "helpful" errors such as "XPCOM cannot start" in case of a SONAME mismatch. [131083020150] |You will have problems uninstalling them as well, and it may leave garbage in the system. [131083020160] |Usually uninstall scripts are not well-tested and even the build ones are written carelessly. [131083030010] |Allowing user to read only parts of NTFS filesystem [131083030020] |After installing ntfs-3g I have an option in nautilus to mount a Windows directory but I need to give the root password. [131083030030] |While I have no objection to giving the root password, I would prefer to be restricted to the permissions of the corresponding Windows user (i.e. disallowing modification of system files). [131083030040] |Is it easily achievable or do I need to post a feature request? [131083040010] |Disclaimer: I did not try this, so it may or may not work; I don't have an NTFS volume around. [131083040020] |Mount the whole FS with permissions that prevent target users from reading it. [131083040030] |Mount a directory of the resulting tree at an accessible mount point with mount --bind and a subsequent mount -o remount with a different uid and umask that allow target users to read it. [131083050010] |There IS a way to recognize Windows permissions on an ntfs-3g mount. [131083050020] |You have to create a user-mapping file. [131083050030] |See here. [131083050040] |This can be done from within Linux too, with the ntfs-3g.usermap utility. [131083050050] |See the manual pages for mount.ntfs-3g and ntfs-3g.usermap. [131083050060] |(I use Fedora 14.)
[131083050070] |EDIT: I don't know what effect enabling this will have on Nautilus' mount feature. [131083050080] |Me, I like to mount the partitions in /etc/fstab and leave it at that. [131083060010] |"No protocol specified" when running vim with sudo [131083060020] |Recently I started to get "No protocol specified" when using sudo vim. [131083060030] |It's just a warning I guess, because everything was working normally (I can open, edit and save files). [131083060040] |The message doesn't appear if I use sudo -E vim so I think I did something wrong when editing /etc/profile recently, but I'm not sure. [131083060050] |How can I fix this? [131083070010] |According to this thread, there are two possible solutions to your problem: [131083070020] |Put the following line in the root user's .bashrc script [131083070030] |then I copied .Xauthority to root also, i.e. [131083070040] |and now the warning is gone. [131083070050] |You could also try running via gksudo. [131083070060] |Anyway, both are worth a try... [131083090010] |NTFS (New Technology File System) is the standard file system of Windows [131083110010] |NTFS-3G is an open source cross-platform implementation of the Microsoft Windows NTFS file system with read-write support [131083120010] |How to make the system automatically mount an encrypted device in Linux [131083120020] |When I plug my external storage in, I need to automatically mount it as an encrypted device. [131083120030] |How do I make this happen? [131083130010] |The issue with that is, for the system to automatically mount the encrypted device, the key for that device must be stored on the same system somewhere. [131083130020] |So, if your system is stolen, the key could be compromised. [131083130030] |If this is okay for you, then read on. 
[131083130040] |udev is the plug-and-play manager of Linux; anytime hardware is (dis)connected, it goes through udev, and udev is responsible for putting it in the /dev directory somewhere or doing whatever needs to be done to make it recognized by the rest of Linux. [131083130050] |By digging into the depths of how udev works, you'll find it's possible to run a script when a USB mass storage device is connected. [131083130060] |Basically you'll need to go to /etc/udev/rules.d. [131083130070] |All files here are parsed by udev when it (re)starts, and these files can be used to fire off scripts when certain devices are connected. [131083130080] |Don't change anything you see here, but I added a z60_usbmount.rules with the following contents: [131083130090] |KERNEL=="sd*", BUS=="usb", ACTION=="add", RUN+="/etc/local/usbmountcheck udev add $kernel $parent" [131083130100] |KERNEL=="sd*", ACTION=="remove", RUN+="/etc/local/usbmountcheck udev remove $kernel $parent" [131083130110] |Thus when any external drive is attached via usb, that usbmountcheck script will run, with all the information udev gives up about the device. [131083130120] |The usbmountcheck script is a bit complicated, because you want to uniquely identify the drive, and the sda, sdb, etc. name, the $kernel name, won't do that. [131083130130] |Here's the bit of logic I included in my script to do that: [131083130140] |At this point $VNAME will have the device name as identified by USB. [131083130150] |You can then test if it's a known encrypted volume, and script the appropriate commands to mount it. [131083130160] |You'll also have to script an umount handler to automatically clean up after a disconnect. [131083130170] |There are a lot of dangers in writing udev scripts, because if they fail it could prevent udev from working and recognizing further hardware changes. [131083130180] |Tread with caution. [131083140010] |Install Scilab in SUSE 11.3 [131083140020] |I want to install Scilab in SUSE 11.3. 
[131083140030] |Which repository is it in? I cannot find it in the package manager. [131083140040] |I have tried to install the rpm manually but I cannot resolve the dependencies. [131083150010] |Wget command that returns [a list of] all webpages on a webserver [131083150020] |Let's say we have a website www.example.com with the following pages: [131083150030] |Is there a Wget command that would produce the following output: [131083150040] |How would this command change if the website structure was: [131083150050] |Essentially I have a server IP address and would like to be able to list all the pages held on the server, to see if there are any I am interested in downloading. [131083150060] |For instance I can do wget -r www.example.com/page1 and I want to know if there are any other directories (page2, page3, etc) that I might be interested in. [131083150070] |I have researched the --spider option and others, but with no joy. [131083160010] |This is not possible. [131083160020] |There is no HTTP request method for that; an HTTP retrieval request always takes a particular URL as a parameter. [131083170010] |You can tell wget to recursively download an entire website, but it does so by following the links on each page. [131083170020] |If it doesn't know page1, page2 and page3 are there, it will never retrieve them. [131083170030] |Put bluntly, HTTP does not work that way -- fortunately. [131083180010] |Like Rens and franziskus say, there is no way to do that from page1; your only chance will depend on how the website you want to copy is set up. [131083180020] |It is unlikely for the root directory, but sub-directories (providing you know they exist) may be configured in such a way that they give you a list of files (some sort of visual ftp). [131083180030] |But then you're exploiting what most webmasters are trying to hide away from you: the internals of their websites. 
[131083180040] |I have successfully exploited this to get to information I was confident was there, but could not find in any way through website navigation. [131083180050] |It only works with very few websites. [131083190010] |You can't do this from the client end, but you can look for a site map; sometimes the http://www.example.com/robots.txt file might contain a list. [131083190020] |There may be a way to ask Google for a list, and there may be a list at the Wayback Machine. [131083200010] |How do I configure the GNOME gdm login screen? [131083200020] |I recently installed Arch Linux and have it mostly set up. [131083200030] |With many other Linux distributions, there is a tool that is used to configure the look and feel of the login screen. [131083200040] |I would like to change the wallpaper used, the date/time format from something like "Sat 6:27 PM" to "26 Feb 11 18:27", and the refresh rate of the monitor (it's fine when I'm logged in, but not at the login screen). [131083200050] |Where can I find these configuration options or (preferably) a GNOME application to allow me to make these configuration changes? [131083210010] |The ArchWiki is a very good source of information. [131083210020] |This is where I found the following: [131083210030] |To configure the GDM theme use this command: [131083210040] |For more configuration options, use this command: [131083210050] |And modify the following hierarchies: [131083210060] |You may end up with an Xauth error. [131083210070] |If that happens, try gksudo instead of sudo. [131083210080] |If the error persists, you can do this: [131083210090] |This gives you the xauth cookie being used by your user. [131083210100] |Copy the output, then run the following, replacing "" with the output of the previous command. [131083210110] |This logs you in as the gdm user, adds the cookie, permitting gdm to use your display, and launches gnome-appearance-properties. 
[131083220010] |At least on my systems the default login provides a menu option to change the configuration from the login screen. [131083220020] |But the monitor refresh is not controlled by GDM; that is an X server configuration, edited in the xorg.conf file. [131083230010] |How to access a web server running on a Palm Pre from another machine? [131083230020] |I have a Palm Pre, and I've installed my webserver on it; it listens on port 8080. [131083230030] |It works: when I open 192.168.1.104:8080 in the Pre's browser (that's the IP address of the device; I've failed with localhost or 127.0.0.1), it shows images, everything is OK. [131083230040] |But I can't access the webserver from outside, e.g. from my desktop machine; it shows a timeout. [131083230050] |Is there a firewall on the Pre, or what's wrong? [131083230060] |I can change the server's port number, if necessary. [131083230070] |I didn't configure the Pre, just installed SSH. [131083230080] |Also, I can SSH to the Pre, and from the Pre to my desktop. [131083230090] |UPDATE: [131083230100] |ifconfig says: [131083230110] |iptables -nvL says: [131083230120] |I assume, TCP should be accepted for port 8080 or whatever I want, just as for 22 (SSH). [131083240010] |At the moment, incoming traffic is blocked unless explicitly allowed (that's what policy DROP means). [131083240020] |There are rules to allow “legitimate” traffic, for example the very first rule allows incoming ssh connections (tcp dpt:22 means TCP traffic to port 22, and that's the ssh port). [131083240030] |The manual way to enable a web server is to add an iptables rule that allows incoming traffic to that port. [131083240040] |Let's say you want to open port 80. [131083240050] |You can do it with the following command: [131083240060] |(Note that I'm following the model of ports 3689 and 5353, which allow for things like rate control. [131083240070] |Ssh is handled specially, I guess to reduce the risk that a misconfiguration will make it inaccessible.) 
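The command referred to above might look something like this sketch (port 8080 is from the question; this must run as root on the device, and the exact match options may need adjusting to mirror the existing 3689/5353 rules):

```shell
# Sketch: insert an ACCEPT rule for TCP port 8080 ahead of the DROP policy.
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# Verify that the rule is listed:
iptables -L INPUT -n --line-numbers | grep 8080
```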
[131083240080] |There's probably a canonical way to have your settings applied at boot time. [131083240090] |Googling suggests that once you have a satisfactory setting, you can make it permanent with [131083240100] |I don't know WebOS, so I can't confirm the location of this file. [131083240110] |During your experimentations, if you mess up, you can delete a specific rule (e.g. the 42nd) with iptables -D INPUT 42. [131083240120] |Assuming the location above is correct, you can restore the boot-time settings with [131083250010] |What can I do to make the transition to some new computer hardware safe and smooth? [131083250020] |I normally use machine A, and I make backups of A onto a fileserver B. Sooner or later, I will lose machine A for one reason or another. [131083250030] |Its hard drive wears out, or it gets hit by lightning, or some salesperson convinces me that it's an embarrassing piece of obsolete junk, or an overclocking experiment goes horribly wrong, or it suffers a "glitter-related event", etc. [131083250040] |Let's assume that computer C is totally different from computer A -- different mass storage interface, processor from a different company, different screen resolution, etc. [131083250050] |Is there an easy way to make a list of all the software currently installed on A before disaster strikes, in a way that makes it easy to install the same software on the blank hard drives of computer C? [131083250060] |Or better yet, makes it easy to install the latest versions of each piece of software, and the specific sub-version optimized for this particular machine C? [131083250070] |If I have plenty of space on B, it seems easiest to copy everything from A to B. If I do that, what is a good way of dividing the files I want to copy from B to C from the files I don't? [131083250080] |I don't want to copy binary files I can easily re-download (and possibly re-compile) as needed, and which probably wouldn't work on machine C anyway. 
[131083250090] |Or is it better in the long run to try to avoid backing up such easily-obtained machine-specific binary files onto B in the first place? [131083250100] |Is there a better way to reduce the chances that viruses and trojans get passed on to C and re-activated? [131083250110] |When I customize software or write fresh new software, what is a good way to make sure the tweaks I have made get backed up and transferred to the new machine and installed? [131083250120] |Such as cron and anacron tasks? [131083250130] |What can I do to make my transition to some new computer C safe and smooth? [131083250140] |(This question expands on a sub-question of "Incremental system backup and restore w/ rsync or rdiff-backup issues" that I thought was particularly important). [131083260010] |All this depends on what package management system your distro uses. [131083260020] |If you're a debianish user you can use dpkg to get a list of installed packages. [131083260030] |Redhatesque users can use yum to get a list. [131083260040] |For FreeBSD you can look in /var/db/pkg for a list of packages installed. [131083270010] |First go and read previous threads on this topic: Moving linux install to a new computer (about the same-architecture case), and How do I migrate configuration between computers with different hardware?. [131083270020] |Here I'm going to address a few minor points that weren't covered before. [131083270030] |If you're moving to a computer with the same architecture, and your disk hasn't died, just move the disk into the new machine. [131083270040] |This can be done completely independently of moving the data to a larger disk. [131083270050] |Note that “same architecture” here means the processor architecture type, of which there are only two in current PCs: x86-32 (a.k.a. i386, ix86, IA-32, …) and x86-64 (a.k.a. amd64, Intel 64, …). [131083270060] |Things like specific chipset or processor variant, video devices, storage interfaces, etc, don't matter here. 
[131083270070] |(If the storage interface is incompatible¹, or if one of the computers is a laptop, you'll have to find an adapter or copy across the network.) [131083270080] |For backing up in case your drive fails (it's one of the most fragile components), you have two choices: [131083270090] |
  • Make a bit-for-bit copy of the whole disk or partition. [131083270100] |Then you can restore directly, or even run from the backup in an emergency. [131083270110] |If that's your strategy, you'll still want a file-level tool for incremental updates.
  • [131083270120] |
  • Back up your files. [131083270130] |To restore, do a fresh install, then restore the files.
  • [131083270140] |Your default should be to copy everything; there are very few files that need changing when you move to a new computer. [131083270150] |You will have to reinstall the OS (with most current unices) if you move from a 32-bit PC to a 64-bit PC and you want to use the new PC with a 64-bit OS, but otherwise any bad experience you may have had from Windows does not carry over to Linux or other unices. [131083270160] |To make it easier to ensure your data is on every computer you use (the old one and the new one, the family desktop PC and your personal laptop, etc.), make sure you customize things in your own home directory rather than at the system level. [131083270170] |In Ubuntu or other “user-friendly” terms, this means a customization method where you don't have to enter a password. [131083270180] |But do perform the customization at the system level if it's strongly hardware-dependent (e.g. screen resolution). [131083270190] |¹ This is largely hypothetical. [131083270200] |Most current desktop PCs still have IDE interfaces and are compatible with all general-public internal hard disks since the late 1980s. Surely you've already upgraded all your earlier PCs. 
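For the package-list idea mentioned in the first answer, here is a sketch for Debian-style systems (the file name is arbitrary; assumes dpkg is present):

```shell
# Sketch: record the installed-package list so it can be replayed on machine C.
dpkg --get-selections > /tmp/package-list.txt
head -n 3 /tmp/package-list.txt
# Later, on the new machine (as root; sketch only):
#   dpkg --set-selections < /tmp/package-list.txt
#   apt-get dselect-upgrade
```

Red Hat-style systems can get a comparable list with rpm -qa or yum list installed.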
  • Periodically, use "dd" to make a complete bit-for-bit copy of the /home partition on my work computer (or perhaps each and every partition) to a backup file(s) on my server. [131083280030] |(Is there some way to update last month's backup using something like rsync so I don't have to start from scratch every time, speeding this up?) (Is there some way to do all or most of this in the background, while I'm using my computer?) [131083280040] |
  • Put a liveCD in the working computer, and reboot
  • [131083280050] |
  • sudo dd if=/dev/hda | gzip -c | ssh -v -c blowfish davidcary@my_local_file_server "dd of=backup_2011_my_working_computer.gz"
  • [131083280060] |
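The dd | gzip | ssh pipeline above can be exercised locally before trusting it with a real disk; here is a minimal sketch using an ordinary file (disk.img and backup.gz are made-up names) in place of /dev/hda and the remote server:

```shell
# Simulate imaging a disk: read the "device", compress, store the backup.
printf 'some disk contents' > disk.img
dd if=disk.img bs=4k 2>/dev/null | gzip -c > backup.gz

# Restoring reverses the pipeline (on a real restore, write to the device).
gzip -dc backup.gz > restored.img
cmp -s disk.img restored.img && echo "round trip OK"
```

On the real machines, the middle of the pipeline runs inside ssh exactly as in the bullet above; the round-trip check is just a cheap way to verify the compression and restore steps are symmetric.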
  • keep all "my" files in my $HOME directory. [131083280070] |
  • text files, photographs, web browser bookmark file, etc. -- all in the home directory.
  • [131083280080] |
  • if I write a batch script that "needs" to go in some other subdirectory, keep the master copy somewhere in my $HOME directory, and make a soft link from that other subdirectory to the master copy.
  • [131083280090] |
  • If I write compiled software that "needs" to go in some other subdirectory, keep the master source code and Makefile in some subdirectory of my $HOME directory, and set up the Makefile so "make install" automagically installs the binary executable in that other directory.
  • [131083280100] |
  • If I fix bugs in some software, pass the bug fixes upstream.
  • [131083280110] |
  • keep a list in some text file in my $HOME directory of "apps I like that weren't installed by default" and "apps that are normally installed by default but I don't like". [131083280120] |See How do you track which packages were installed on Ubuntu (Linux)? or How do you track which packages were installed on Fedora (Linux)?
  • [131083280130] |
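One way to maintain that list semi-automatically is to diff a fresh-install package list against the current one with comm. A sketch with stand-in file contents (real lists would come from e.g. dpkg --get-selections or yum list installed):

```shell
# comm needs sorted input; package list dumps sort easily.
printf 'bash\ncoreutils\nvim\n'  > current.lst   # packages on my machine now
printf 'bash\ncoreutils\nnano\n' > default.lst   # packages on a fresh install
comm -13 default.lst current.lst > added.lst     # only in current: I installed these
comm -23 default.lst current.lst > removed.lst   # only in default: I removed these
cat added.lst    # vim
cat removed.lst  # nano
```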
  • if I purchase software on CD or DVD, back up the ISO image on my file server, and install from that image onto my work computer. [131083280140] |(Because my work computer doesn't have an optical drive).
  • [131083280150] |Later, when machine A is lost, [131083280160] |
  • install the latest version of whatever distro is my favorite this week, including all the default software, on machine C.
  • [131083280170] |
  • On the file server B, use "mount" with the "loop device" to allow read-only access to the individual files stored inside that backup file. [131083280180] |(For more information on creating and mounting a read-only compressed disk image, see http://superuser.com/questions/254261/compressed-disk-image-on-linux )
  • [131083280190] |Alas, the user number of "davidcary" on my work computer is different from the user number of "davidcary" on my file server -- so it appears that all these files are owned by some other user. [131083280200] |Is there a way to fix this, or to prevent it in the first place? [131083280210] |
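On the UID mismatch just described: one hedged fix is to remap ownership after the restore (the find/chown line needs root and is shown only as a comment; both UIDs are hypothetical), or to prevent it entirely by creating the user with the same numeric UID on every machine (useradd -u):

```shell
OLD_UID=1001   # hypothetical UID of the user on the work computer
NEW_UID=1000   # hypothetical UID of the same user on the file server
# As root, after restoring, remap every file owned by the old UID:
#   find /home -xdev -uid "$OLD_UID" -exec chown -h "$NEW_UID" {} +

# Checking a file's numeric owner (the number ls -l hides behind a name):
touch demo.txt
ls -n demo.txt | awk '{print $3}'
```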
  • copy my /home/ directory from that backup file to the /home/ directory of my new work machine, somehow (?) skipping over all binary executables. [131083280220] |This blocks some kinds of viruses and trojans from spreading to C.
  • [131083280230] |
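The "somehow (?) skipping over all binary executables" step above can be approximated by filtering on the execute bit; this is only a heuristic (file(1) can identify real ELF binaries more reliably). A runnable sketch using GNU find and cp:

```shell
mkdir -p src dst
printf 'hello\n' > src/notes.txt          # ordinary data file
printf 'stub\n'  > src/a.out              # stand-in for a compiled binary
chmod +x src/a.out

# Copy everything that is NOT user-executable, preserving the tree layout.
( cd src && find . -type f ! -perm -u+x -exec cp --parents {} ../dst \; )
ls dst   # notes.txt only
```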
  • setup other stuff outside my home directory: [131083280240] |
  • check my list of apps, uninstall stuff I don't want.
  • [131083280250] |
  • check my list of apps, install the latest version of stuff I want. [131083280260] |Hopefully the latest version includes the bug fixes I've passed back. [131083280270] |(See links above for ways to automate this process)
  • [131083280280] |
  • do "make superclean" and "make install" with each of the compiled programs I've written.
  • [131083280290] |
  • somehow (?) remember where the batch scripts "need" to go, and create a soft link from that location to the master source in my /home/ directory. [131083280300] |(Is there a way to automate this?)
  • [131083280310] |
  • somehow (?) remember all the stuff I have running as cron and anacron jobs, and enter them back in again.
  • [131083280320] |
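For the cron/anacron item just above, one hedged approach is to keep the crontab itself as a file in the home directory so it rides along with the normal backup (nightly-backup.sh is a made-up job name; system-wide anacron jobs live under /etc/cron.* and need separate copying):

```shell
# Dump the current user's cron jobs (there may be none yet).
crontab -l > cron.backup 2>/dev/null || :
printf '0 3 * * * %s\n' "$HOME/bin/nightly-backup.sh" >> cron.backup
# On the new machine, reload with:  crontab cron.backup
grep -c 'nightly-backup' cron.backup
```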
  • install software that was purchased on CD from the ISO image on the file server.
  • [131083280330] |
  • ... is there anything else I'm missing?
  • [131083290010] |$HOME in version control [131083290020] |
  • Periodically commit everything in the /home directory to the version control repositories. [131083290030] |(Except always do "make superclean" just before committing the programmer's $HOME directory, so he never commits binary executables or other easily-machine-generated files).
  • [131083290040] |
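A minimal sketch of the "$HOME in version control" idea above, using git purely for illustration (the answer says subversion; any VCS with a working-directory checkout fits the pattern):

```shell
mkdir -p home-demo
git init -q home-demo                        # in real life, $HOME itself is the repo
echo 'alias ll="ls -l"' > home-demo/.bashrc  # dotfiles are ordinary tracked files
git -C home-demo add .bashrc
git -C home-demo -c user.email=me@example.com -c user.name=me \
    commit -qm 'track dotfiles'
git -C home-demo log --oneline
```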
  • For each user that "owns" unique data on my work computer, make sure there is some sort of version control repository on my file server that contains that user's entire $HOME directory ("$HOME in subversion"). [131083290050] |Even though I'm practically the only human that touches this keyboard, I have a separate user for: The untrusted web browser who likes to install lots of potentially malware-infected games; the C programmer who often writes software with horrific bugs and so we want to keep him isolated in a sandpile where he can't accidentally delete my favorite web bookmarks; the robot user that runs the wiki; the root user; etc.
  • [131083290060] |
  • keep all "my" files in my $HOME directory, which is a version control working directory. [131083290070] |
  • text files, photographs, web browser bookmark file, etc. -- all in the home directory.
  • [131083290080] |
  • if I write a batch script that "needs" to go in some other subdirectory, keep the master copy somewhere in my $HOME directory, and make a soft link from that other subdirectory to the master copy.
  • [131083290090] |
  • If I write compiled software that "needs" to go in some other subdirectory, keep the master source code and Makefile in some subdirectory of my $HOME directory, and set up the Makefile so "make install" automagically installs the binary executable in that other directory.
  • [131083290100] |
  • If I fix bugs in some software, pass the bug fixes upstream.
  • [131083290110] |
  • keep a list in some text file in my $HOME directory of "apps I like that weren't installed by default" and "apps that are normally installed by default but I don't like". [131083290120] |See How do you track which packages were installed on Ubuntu (Linux)? or How do you track which packages were installed on Fedora (Linux)?
  • [131083290130] |
  • if I purchase software on CD or DVD, back up the ISO image on my file server, and install from that image onto my work computer. [131083290140] |(Because my work computer doesn't have an optical drive).
  • [131083290150] |Later, when machine A is lost, [131083290160] |
  • install the latest version of whatever distro is my favorite this week, including all the default software, on machine C.
  • [131083290170] |
  • For each user, do a version control checkout of the latest version (HEAD), except somehow (?) skipping over all binary executables. [131083290180] |This blocks some kinds of viruses and trojans from spreading to C.
  • [131083290190] |
  • setup other stuff outside my home directory: [131083290200] |
  • check my list of apps, uninstall stuff I don't want.
  • [131083290210] |
  • check my list of apps, install the latest version of stuff I want. [131083290220] |Hopefully the latest version includes the bug fixes I've passed back. [131083290230] |(See links above for ways to automate this process)
  • [131083290240] |
  • do "make superclean" and "make install" with each of the compiled programs I've written.
  • [131083290250] |
  • somehow (?) remember where the batch scripts "need" to go, and create a soft link from that location to the master source in my /home/ directory. [131083290260] |(Is there a way to automate this?)
  • [131083290270] |
  • somehow (?) remember all the stuff I have running as cron and anacron jobs, and enter them back in again.
  • [131083290280] |
  • install software that was purchased on CD from the ISO image on the file server.
  • [131083290290] |
  • ... is there anything else I'm missing?
  • [131083300010] |life in a virtual machine [131083300020] |
  • Set up a virtual machine on my work computer. [131083300030] |Do all my real work inside that virtual machine.
  • [131083300040] |
  • Periodically pause the virtual machine, and back up the virtualized disk and virtual system state to the file server. (Is there some way to do all or most of this in the background, while I'm using my computer, so I only need to pause long enough to back up the last few things?)
  • [131083300050] |Later, when machine A is lost, [131083300060] |
  • install some convenient host operating system onto the new work machine C.
  • [131083300070] |
  • install virtual machine player onto work machine C.
  • [131083300080] |
  • Copy the virtualized disk file and virtual system state file from the file server to machine C.
  • [131083300090] |
  • Run the virtual machine player to un-pause that virtual machine.
  • [131083300100] |Alas, now C is running all the viruses and trojans that A has collected -- is there a way to block at least some of them? [131083310010] |Assuming you are using Debian-like Linux [131083310020] |
  • periodically on machine A run: [131083310030] |and keep backup.pkg.lst file in a safe place [131083310040] |
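The commands themselves are elided in this copy of the answer. A plausible Debian-style reconstruction (hedged: this is the classic dpkg selections idiom, not necessarily the author's exact lines) is sketched here against a stand-in file so it runs even without dpkg:

```shell
# On machine A (reconstruction):  dpkg --get-selections > backup.pkg.lst
# On machine C (reconstruction):  dpkg --set-selections < backup.pkg.lst
#                                 apt-get dselect-upgrade
# The file format is "package<TAB>state", e.g.:
printf 'vim\tinstall\nnano\tdeinstall\n' > backup.pkg.lst
awk '$2 == "install" {print $1}' backup.pkg.lst   # vim
```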
  • When disaster happens, do a minimum install on machine C (or A) (even without GUI) and run as root: [131083310050] |and restore your /home directory from a backup [131083320010] |Fast way to build a test file with every second listed in YYYY-mm-dd HH:MM:SS format [131083320020] |I want to create a large test file with lines containing dates listed by the second, but my method is taking inordinately long... (or at least, that's how it feels :) ... [131083320030] |43 minutes to create only 1051201 lines. [131083320040] |20.1 MB file.... [131083320050] |I want to create a much bigger file, with each line's date being unique. [131083320060] |Is there a faster way than how I've approached it?: [131083330010] |This script generates a 10 million line 201 MB file in 7m50.0s on a VM I have handy. [131083330020] |That's about 1.5 GB/hr. [131083340010] |I haven't made any benchmark, but I see a few potential improvements. [131083340020] |You open and close the file for each call to date. [131083340030] |This is a waste: just put the redirection around the whole loop. [131083340040] |You're making separate calls to date for each line. [131083340050] |Unix is good at calling external programs quickly, but internal is still better. [131083340060] |GNU date has a batch option: feed it dates on standard input, and it pretty-prints them. [131083340070] |Furthermore, to enumerate a range of integers, use seq; it's likely to be faster than interpreting the loop in the shell. [131083340080] |Generally speaking, if your shell script is too slow, try to have the inner loop executed in a dedicated utility — here seq and date, but often sed or awk. [131083340090] |If you can't manage that, switch to a more advanced scripting language such as Perl or Python (but the dedicated utilities are typically faster, if you fit their use cases).
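The seq-plus-batch-date advice above can be sketched concretely (GNU date only; the @N input form means seconds since the epoch):

```shell
TZ=UTC
export TZ
# One date process formats every line fed on stdin -- no per-line fork.
seq 0 4 | sed 's/^/@/' | date -f - '+%Y-%m-%d %H:%M:%S'
# 1970-01-01 00:00:00
# ...
# 1970-01-01 00:00:04
```

Scaling the seq range up (e.g. two years is about 63 million seconds) keeps the same single-fork structure, which is why this approach is so much faster than a shell loop calling date per line.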
[131083350030] |To get a better understanding of where bash is spending its time, we can use strace -c. [131083350040] |So we can see that the top two calls are waitpid and clone. [131083350050] |They don't take up much time on their own (only 0.128906 seconds and 0.029241 seconds), but we can see they are being called a lot, so we suspect the problem is the fact that we have to start a separate date command to echo each number. [131083350060] |So then I did some searching, and found out you can compile bash with gprof support by doing: [131083350070] |Now using that: [131083350080] |So assuming the function names are meaningful, it confirms that the problem is we are making bash fork and call an external command repeatedly. [131083350090] |If we move the >> to the end of the while loop, it barely makes a dent. [131083350100] |But Gilles' answer finds a way to only run date once, and not surprisingly, it's much faster: [131083350110] |7 waitpids and 4 clones compared to 28806 and 14403 in the original! [131083350120] |So the moral is: If you have to call an external command inside a loop that is repeated many times, you either need to find a way to move it out of the loop, or switch to a programming language that doesn't have to call an external command to do the work. [131083350130] |As requested, a test based on Iain's method (modified to use the same variable names and looping): [131083350140] |Results: [131083350150] |So close and open are showing up. [131083350160] |Now Eelvex's observation about >> per line versus > around the while loop starts to make a difference. [131083350170] |Let's factor it out... [131083350180] |Which is also much, much faster than the original script, but slightly slower than Gilles'. [131083360010] |What happens when I open and read from /proc? [131083360020] |Does the hard drive need to be accessed or is everything done in memory? [131083360030] |Basically I would like to constantly get updated values from meminfo and cpuinfo.
[131083360040] |Do I need to reopen the file and then reread in order to get an updated value or can I just reread? [131083360050] |I don't have access to a Linux install at the moment. [131083370010] |The /proc filesystem is a so-called "pseudo filesystem", meaning that (afaiu) there is no disk usage. [131083370020] |I'm not quite sure how this works at the lowest level, so I may be wrong, but here goes. [131083370030] |If I run [131083370040] |I get two different outputs. [131083370050] |Afaik, seek(0) only resets the read offset and it doesn't re-open the file. [131083370060] |This suggests that re-reading a file is enough to get the new values. [131083370070] |That said, if you are developing for Linux, getting permanent access to a Linux box seems a pretty logical thing to do... [131083380010] |When you read from /proc, the kernel generates content on the fly. [131083380020] |There is no hard drive involved. [131083380030] |What you're doing is similar to what any number of monitoring programs do, so I advise you to look at what they're doing. [131083380040] |For example, you can see what top does: [131083380050] |The trace shows that top opens /proc/uptime, /proc/loadavg, /proc/stat and /proc/meminfo once and for all. [131083380060] |For all these files except /proc/uptime, top seeks back to the beginning of the (virtual) file and reads again each time it refreshes its display. [131083380070] |Most of the data in /proc/cpuinfo is constant, but a few fields such as the CPU speed on some machines are updated dynamically. [131083380080] |The proc filesystem is documented in the kernel documentation, in Documentation/filesystems/proc.txt. [131083380090] |If you get desperate about some esoteric detail, you can browse the source. [131083390010] |The Unix filesystem is a single hierarchy, which Linux represents with its VFS subsystem. [131083390020] |You mount filesystems on some nodes of the tree, for example when you plug a usb key.
[131083390030] |When you try to read a file, the VFS looks at which filesystem is mounted there, and forwards the request to the appropriate module. [131083390040] |Some of the filesystems are backed by disk I/O, some represent kernel datastructures (/proc, /sys, debugfs, cgroups…), some are network based, some are memory-based (tmpfs), some are implemented in userland via FUSE, and can be backed by crazy stuff like databases, VCSes, ssh, archives and so on. [131083400010] |The files are not stored on disk, but they are hooks to the kernel. [131083400020] |When you open a file (using fopen()), the kernel handles this job. [131083400030] |It walks through the mountpoints, finds the appropriate driver to handle the request, and hands the task to that driver. [131083400040] |In the case of /proc, the file read request is passed to the internal "proc" system in the kernel. [131083400050] |At the moment you read the file, it returns the value from memory. [131083400060] |A similar pattern also happens with the files in /dev. [131083400070] |The kernel sees you open a dev-node with a certain device ID, and associates the IO stream with a particular driver that handles the request. [131083400080] |Basically I would like to constantly get updated values from meminfo and cpuinfo. [131083400090] |You can read the proc filesystem to read these values, or see if there are maybe other syscalls you can use for it. [131083400100] |It will be a polling mechanism nevertheless, so there is always a certain system load involved. [131083410010] |The /proc filesystem is a pseudo-filesystem. [131083410020] |It is a convenient way of transferring memory from user space to kernel space and vice versa. [131083410030] |Each entry (file or directory) in the /proc directory is created by a part of the kernel. [131083410040] |Each entry can be read and/or written. [131083410050] |They can be opened from userspace like any normal file.
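The "generated on the fly" behaviour described above is easy to see from the shell (Linux-only sketch; /proc/uptime produces fresh content on every read, whether you reopen the file as cat does or seek back to offset 0 as top does):

```shell
a=$(cat /proc/uptime)   # each open/read regenerates the content
sleep 1
b=$(cat /proc/uptime)
[ "$a" != "$b" ] && echo "fresh values on every read"
```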
[131083410060] |Entries are created in roughly the following manner (inside a kernel module): [131083410070] |So you roughly specify the path to be created, and functions that are to be called on read/write (You need one or both of them). [131083410080] |The read function returns a string (like a file read call), while the write function takes a string. [131083410090] |The corresponding read and write functions are called whenever a program tries to read/write to the corresponding proc file path. [131083420010] |OpenCV and Cygwin configuration [131083420020] |Hello, [131083420030] |I'm trying to configure OpenCV-2.2.0-win32-vs2010 with Cygwin to work together. [131083420040] |Any ideas on how I can do that? [131083420050] |Thanks [131083430010] |(Much of this is covered by the election stats page) [131083430020] |I'm a pro-tem mod currently, and rather like doing it. [131083430030] |I've been on the site since the private beta, and I'm here pretty continuously throughout the day (1 of 8 fanatics, and just over 200 consecutive days now (double fanatic?)), so I handle most of the flags that come in. [131083430040] |I like the maintenance side of Stack Exchange sites. [131083430050] |I'm a copy editor on SO, and about 50 edits away here (1 of 5 with strunk & white). [131083430060] |I spend most of my time in the miscellaneous quality tools: the review page, flagged posts, posts with close/delete votes (so often empty that I'm tempted to start ignoring that one), tag synonyms, suggested edits, migrated posts, etc. [131083430070] |I'm on our meta (1 of 6 with convention), but activity there is pretty limited compared to the other sites. [131083430080] |I'm also active on the main meta, so I generally know when policies have changed or new features have come out. [131083430090] |I'm on chat, for what that's worth; it's at least easy to get my attention if you need something or have a question.
[131083430100] |I have the second highest rep here, 8k, and 40k network-wide, although rep doesn't mean much when it comes to choosing a mod other than perhaps indicating how active they are. [131083430110] |Sites that have more nominees than positions (which includes us now) have "town hall chats" so nominees can answer questions, so if you have any questions you can comment here or wait and ask them then. [131083440010] |What is the best touch screen kiosk solution? [131083440020] |I'm a web developer and I'm looking for the best stripped-down Linux experience that provides a full-screen experience for touch screens. [131083440030] |I'll be looking for: [131083440040] |
  • a robust experience that stops people loading up porn or logging out
  • [131083440050] |
  • booting from USB, CD, HDD or all three if possible
  • [131083440060] |
  • remote admin (probably via VPN)
  • [131083440070] |
  • complete screensaver and power-save control
  • [131083440080] |
  • the ability to hide cursors
  • [131083440090] |
  • an onscreen keyboard, usually web-based for the customer but a native version for admin
  • [131083440100] |
  • the ability to autorun 3D fullscreen apps such as Blender games
  • [131083440110] |
  • possibly multi-touch functionality in the future but this isn't a deal-breaker
  • [131083440120] |My favourite options at the moment are: [131083440130] |
  • SLAX
  • [131083440140] |
  • Webconverger
  • [131083440150] |This kiosk project looks worth using too. [131083440160] |Could I get some recommendations for the best configuration please? [131083440170] |Thanks [131083450010] |How to create a new file and edit it as root? [131083450020] |I've tried: gksudo gedit /etc/xinetd.d/tftp (this is the path I need). [131083450030] |I got an error message: [131083450040] |What do I need to do? [131083460010] |My version of gedit (2.22.3) does create the file if it doesn't exist. [131083460020] |If yours doesn't, you can create the file first with sudo touch /etc/xinetd.d/tftp. [131083460030] |I recommend using sudoedit /etc/xinetd.d/tftp instead. [131083460040] |This uses your favorite editor, taken from the EDITOR environment variable, or a system default if you haven't expressed your preference. [131083460050] |Add export EDITOR=/usr/bin/gedit to your ~/.profile if necessary. [131083460060] |See also How do I run a command as the system administrator (root). [131083470010] |How to disable certain system calls for a given user? [131083470020] |I am creating a web app similar to codepad and for each run-action my app copies a directory (/home/radeks/voidptr/private/chroot-root) to /tmp/voidptr/[random-id]/chroot-root. [131083470030] |This chroot directory has a user that compiles and runs the code entered by the web app user. [131083470040] |The problem is that I don't want the user to shut down the machine or use sockets, for example. [131083470050] |Do I need to write a supervisor or can I simply set these permissions per user? [131083480010] |Chroot only restricts filesystem access. [131083480020] |If you have root permissions, a chroot is merely an inconvenience, not a secure confinement. [131083480030] |Ordinary users can use sockets but not shut down the machine. [131083480040] |There are ways to restrict what users can do: capabilities, AppArmor, SELinux, … But by far the easiest way is to confine the webapp user to a virtual machine.
[131083480050] |You will easily be able to restrict things like network connectivity and resource usage. [131083480060] |For your use case, a lightweight Linux-on-Linux vm technology seems best: something like OpenVZ, user-mode Linux or Vserver. [131083490010] |ssh-agent error on ubuntu server 10.04 LTS [131083490020] |Hi everybody, I have an Ubuntu Server 10.04 installation on Linode. [131083490030] |I am trying to use ssh-agent to stop typing my passphrase every time I need to push some changes to GitHub. [131083490040] |I am using the script provided by GitHub here. [131083490050] |But when I source my .zshrc file or when I logout and log back in I get the following error message: [131083490060] |Does anyone know what the problem might be and how I could fix it please? [131083500010] |Your .zshrc may be modifying the DISPLAY variable, or modifying access to the screen. [131083500020] |Normally ssh-agent is started when you start gnome. [131083500030] |The variable SSH_AUTH_SOCK is set to the socket of your ssh-agent. [131083500040] |If this variable is set, then ssh-agent will try to open a window to get the password for your key when it is needed. [131083500050] |If your display is not accessible you can use ssh-add to add the key from the command line. [131083500060] |Do this after you reboot. [131083500070] |You also need to repeat this if your key ages out. [131083510010] |date is a command line utility for printing or setting the system date and time. [131083510020] |The base date (epoch) used for calculations and conversions is 00:00:00 UTC on 1 January 1970. [131083510030] |Date can also be used to print a given date and time in a variety of formats, e.g.: [131083510040] |date outputs the current time, e.g. Sun Feb 27 16:31:59 GMT 2011; date '+%Y-%m-%d %H:%M:%S' outputs 2011-02-27 16:33:22; date -d @1298824473 converts a Unix timestamp, outputting Sun Feb 27 16:34:33 GMT 2011; date -s "Sun Feb 27 16:34:33 GMT 2011" with suitable privilege will set the system date.
[131083520010] |Date is a command line utility for printing or setting the date and time. [131083530010] |I am nominating myself because nominating others is not possible. [131083530020] |Look at my questions, answers and comments to get an idea of my activities at unix.stackexchange. [131083530030] |Until now I am not active at the meta page - but perhaps this will change if there is a need. [131083530040] |I agree with xenoterracide that cross-platform related questions are on-topic here. [131083530050] |I want to help keep unix.stackexchange as open as possible to Unix/Linux related questions. [131083540010] |Anyone here used Plan9? [131083540020] |Has anyone here tried Plan 9 by Bell Labs, which is a successor to Unix (according to Wikipedia)? If yes, how was your experience? [131083550010] |"bad mirror archive": What should I put as the "mirror of ubuntu archive" on installation of ubuntu? [131083550020] |I'm trying to set up ubuntu on my old macintosh laptop, but when it asks me to put in the "mirror of ubuntu archive" I run into problems. [131083550030] |For the mirror, I put in "mirror.anl.gov"; for the ubuntu archive mirror directory, I put in "pub/ubuntu/". Then I leave the http proxy blank and hit "continue". [131083550040] |It then tries to "download release files", which starts at 0%, and eventually jumps straight to 100% and then it presents me with an error of "bad mirror archive". [131083550050] |Any advice? [131083550060] |I have never set up ubuntu before, and I'm really struggling with this step. [131083560010] |What does the name of the unix command apropos mean? [131083560020] |Apropos is a tool to search the headers of the man pages for a string. [131083560030] |What does the name apropos mean? [131083570010] |I always thought it is from à propos, meaning "in connection, concerning, with regard, in reference". [131083570020] |Edit: Apparently there are also the words apropos and à propos meaning the same, which are derived from the French à propos.
[131083580010] |Definition #4 is where the unix command stems from. [131083580020] |The results returned are in reference to the input argument. [131083590010] |On my system, dict apropos tells me: [131083600010] |In computing, apropos is a command to search the manual page files in Unix and Unix-like operating systems. [131083610010] |In English, apropos is a word that means appropriate or at an opportune time. [131083620010] |Also, in the French version of most software, the "About" item is translated as "À propos", which is a fairly accurate translation. [131083630010] |qmake looking for lib files named *.pc [131083630020] |I'm trying to compile some software (FocusWriter) on openSUSE 11.3 (linux 2.6.34.7-0.5-desktop). [131083630030] |(I can't find an actual download link to the alleged openSUSE RPM...just lots of metadata about the RPMs). [131083630040] |So I unpacked the source from git, and, following instructions, ran qmake. [131083630050] |I get this: [131083630060] |I know that all those packages are in fact installed, according to both YaST and zypper. /usr/lib64/ contains files such as libao.20.2 and libzip.so.1 and libzip.so.1.0.0 -- but nowhere on the hard drive can I find anything called ao.pc, hunspell.pc, or libzip.pc. [131083630070] |Any suggestions what I'm missing here? [131083630080] |Thanks. [131083640010] |You have the user libraries installed, but you also need to install the developer libraries and header files. [131083640020] |Taking ao as an example: [131083640030] |The normal user package includes files like: [131083640040] |whereas the developer package includes files like: [131083640050] |And it's the second set of files you're missing. [131083640060] |I'm not familiar with SUSE's YaST2, but the commands should look something like yast2 --install libao-devel. [131083640070] |And the same for the other packages of course.
[131083640080] |One way to double check the name of the RPM to install is to go to rpmfind.net and paste one of the missing file names in, e.g. /usr/lib/pkgconfig/ao.pc. [131083640090] |It will give you a list of RPMs: look for the OpenSUSE 11.3 one and use that name when running yast2 --install. [131083640100] |UPDATE [131083640110] |According to Using zypper to determine what package contains a certain file, you can use zypper rather than needing to use rpmfind.net. [131083640120] |Try this: [131083640130] |(untested) [131083640140] |Also, on an RPM-based system, you might find it better to try searching for an RPM .spec file, and build using that. [131083640150] |I found a focuswriter spec file on the OpenSUSE web site. [131083640160] |Then if you build using rpmbuild, it should give you an error telling you which packages you still need to install so you can build it. [131083640170] |This also has the advantage of giving you an RPM you can easily install, upgrade, and uninstall, which uses the SUSE recommended build options. [131083650010] |Xen/KVM/LXC for testing packages [131083650020] |On Debian Stable, I would like to be able to create a new instance of the OS, use apt-get to install some Unstable packages with dependencies, then cleanly delete the whole thing when I'm done. [131083650030] |VirtualBox or QEMU would work, but Xen/KVM/LXC seem to be lighter and faster. [131083650040] |How do they compare for this use? [131083650050] |Edit: To clarify, in this case, I want to be able to install, use, and remove dangerous things without messing up the base system. [131083650060] |Looking for what would be most lightweight/fast. [131083660010] |For this kind of use, I'd go with a specialized Linux-on-Linux virtual machine technology (as opposed to a more general technology such as Xen, KVM, VirtualBox or Qemu): LXC, OpenVZ, user-mode Linux, Vserver… [131083660020] |You could even use a chrooted installation.
[131083660030] |The schroot package is convenient for this. [131083670010] |If you just want to test dependencies, pbuilder (or cowbuilder, which adds COW and is slightly faster to launch), a chroot environment tuned for building packages, would work very well. [131083670020] |If you want to handle untrusted packages, you'll need LXC or full virtualisation. [131083670030] |LXC takes some configuration, but can be handled by libvirt if you want a high level of isolation; you still need to debootstrap it yourself as I recall. [131083670040] |For full virtualisation, vmbuilder has a debian version that prepares and configures images. [131083670050] |Since you don't need the flexibility of LXC, I recommend pbuilder or vmbuilder + kvm. [131083690010] |Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime [131083710010] |A computer may have two (or more) operating systems installed on it; dual-booting allows one to be chosen during the boot process [131083730010] |The use of Unix in embedded computer systems such as networking equipment, mobile phones, media players, set-top boxes, etc [131083750010] |man - format and display the on-line manual pages [131083770010] |Open Source Software for running Windows applications on other operating systems [131083780010] |Default configuration I need to change? [131083780020] |A few days ago I installed a new Linux OS. [131083780030] |Today I realized /root has o+r (755) so EVERYONE is able to see my root SQL password in /root/.my.cnf. [131083780040] |I freaked out and simply changed /root to 750. [131083780050] |My /var/www folder is 2755 but all the folders in it are 2750 (so certain users can browse to the folder without being blind). [131083780060] |What software, file permissions and other DEFAULT configuration should I change? [131083790010] |Perhaps you should do a scan of your system with a tool like tiger.
[131083790020] |Tiger will pick up lots of things like this, and is a great way to get lots of advice and suggestions about how to secure your system. [131083790030] |Tiger can also be useful as a kind of Intrusion Detection System. [131083800010] |Help configuring DBUS to start JACK [131083800020] |I have the Jack Audio Connection Kit (JACK) installed, but cannot seem to get jack_control start to start the service. [131083800030] |I'm using Slackware64-current, which recently updated its /etc/dbus-1/system.conf to have a more restrictive configuration: [131083800040] |Ever since the update, running jack_control start as a regular user produces the following error: [131083800050] |It did not do this before. [131083800060] |The new configuration file says I'm supposed to punch a hole for it in the service configuration files, but I'm baffled as to how I'm supposed to do that. [131083800070] |I'm not even quite sure what DBUS has to do with JACK. [131083800080] |If you could please help me by giving me a little background information or lending me your expertise regarding configuration of DBUS, I would be most grateful. [131083800090] |Extra information: [131083800100] |
  • JACK2 SVN revision 4120 (2011-02-09)
  • [131083800110] |DBUS version 1.4.1
  • [131083800120] |DBUS-Python version 0.83.1
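For reference, "punching a hole" usually means dropping a policy snippet into /etc/dbus-1/system.d/ rather than editing system.conf itself; a sketch might look like the following (the file name and the org.jackaudio.service bus name are assumptions to verify against your jackdbus build):

```
<!-- /etc/dbus-1/system.d/jack.conf (hypothetical) -->
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <policy context="default">
    <allow own="org.jackaudio.service"/>
    <allow send_destination="org.jackaudio.service"/>
  </policy>
</busconfig>
```

Note that jack_control normally talks to the session bus, so if it is the session bus configuration that changed, the snippet would belong under /etc/dbus-1/session.d/ instead.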
[131083810010] |What governs the limits of shell brace expansion? [131083810020] |In this example I refer to the expansion of a sequence of integers, but perhaps(?) the limits would be relevant to all aspects of brace expansion.. [131083810030] |This more general view is also of interest to me. [131083810040] |seq seems to handle much longer integer sequences than does {1..n} brace expansion (at least, that's the case in this example). [131083810050] |eg 'seq -f @%12.0f 1 1000000000 >/dev/null' .. [131083810060] |This expands 1 billion quite happily in 14m 04s [131083810070] |However, echo {1..10000000000} >/dev/null crashes into oblivion from the CLI in 'gnome-terminal' and 'konsole' (...goodbye terminal session!) [131083810080] |The best I could get out of brace expansion for an integer sequence, is approximately {1..15000000}.. only 15 million. [131083810090] |Is this a limitation of the brace expansion itself, or of how echo handles the expanded data? [131083810100] |It seems to be caused by using up all available RAM, but I thought it would use the swap area at that point... [131083810110] |Also (btw), this 15000000 integer sequence, echo {..} takes 57.0s; whereas seq takes only 12.7s ... [131083820010] |I guess this expansion is not designed to be used that way. [131083820020] |The crash indicates a bug, surely, but a rarely triggered one. [131083820030] |How practical do you think it is to feed billions of consecutive integers to anything? [131083830010] |echo {1..5} is expanded into the command echo 1 2 3 4 5 which is then expanded in the usual way. [131083830020] |It is not at all similar to seq 1 1000000000 >/dev/null, which never expands to a command with very many arguments. [131083830030] |It is more like echo $(seq 1 1000000000): I guess this breaks in the same way?
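The difference in mechanism is visible even at a small scale: the brace expansion is built in full, as an argument list, before echo ever runs, whereas seq streams its output (a quick demonstration):

```shell
# the shell materializes all 1000 arguments before the builtin echo runs
bash -c 'set -- {1..1000}; echo "$#"'   # prints 1000
# seq generates and writes numbers incrementally; nothing accumulates in memory
seq 3
```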
[131083830040] |The problem you are running into has to do with handling large commands, which Unix has always been fussy about; that is to say, it is a general problem with handling command strings. [131083830050] |It is one of the things Perl was written to fix. [131083830060] |I'd file a polite and informative bug report anyway: it might provoke an interesting discussion. [131083840010] |grub error "superfluous RAID member" on update-grub [131083840020] |Hi, [131083840030] |first let me explain: My system (wheezy/sid: 2.6.32-5-amd64 #1 SMP x86_64 GNU/Linux) is running fine - I'm just wondering about these messages... [131083840040] |I have two mdraid arrays running on my system - see cat /proc/mdstat [131083840050] |I use /dev/md0 as / and /dev/md1 as /home. [131083840060] |Whenever update-grub is running - or while grub comes up - I receive the error: error: superfluous RAID member (2 found). [131083840070] |To understand why I'd like to remove this, have a look at the result of update-grub: [131083840080] |I'm interested in removing these messages. [131083840090] |Thanks in advance and greets. [131083850010] |This is a bug in the update-grub script. [131083850020] |According to the Debian bug report, a patch has been applied upstream, so it should be fixed in the Debian package at some point. [131083860010] |Custom X application RCNG startup script woes [131083860020] |I am attempting to build a FreeBSD + XBMC based media box. [131083860030] |Everything is working fine except for one point. [131083860040] |I need to boot XBMC as an RCNG startup script, which is all fine, except: [131083860050] |If the command to start X and XBMC is run in the foreground, all works fine. [131083860060] |If it is pushed to the background (with &) it starts to work, then is kicked out (I think) by getty starting. [131083860070] |Is there some way of stopping getty from killing X, or am I barking up completely the wrong tree? [131083860080] |

    rcng script:

[131083860090] |One more note - when it does kick it out, the screen is screwed up. [131083860100] |It shows me the first bit of the X.Org startup messages and that's all. [131083860110] |No login, no control over it, no ability to start X again even remotely. [131083860120] |I have now turned off ttyv2 and upwards in /etc/ttys - it has stopped the screen from locking up when it kicks out Xorg (gives me a normal getty prompt), but it still kicks it out. [131083860130] |So it's definitely getty / init related. [131083860140] |OK, I am 100% convinced it has to do with getty starting up. [131083860150] |If I put the commands to start the X session in a script with a sleep 5 in it, so the X session doesn't actually start until after getty has started running, it all works fine. [131083860160] |While I can live with this for now, it would be nice to understand why it behaves like this and maybe get it to start up more gracefully. [131083870010] |I haven't been here long, so I wasn't going to nominate myself, but I notice we have a shortage of candidates, so I will put myself forward after all. [131083870020] |I'm active on several Stack Exchange sites, including this one, Server Fault, and the photography site. [131083870030] |I haven't hit 2000 yet here, but will soon. [131083870040] |I think my contribution here has been valuable and will continue to be. [131083870050] |I'd like to see this site develop into the best place for all Unix and Linux questions, and in particular to handle the not-large-sysadmin-environment questions that often hit Server Fault. [131083870060] |It's great so far, but a little small — there aren't many high-reputation users yet, and hopefully we can attract more experts. [131083870070] |I'd like to use a light touch on editing and moderating, with more of a focus on categorization and linking of similar concepts. [131083870080] |I'd also like to see the tag wikis become helpful jumping-off points for common topics.
[131083870090] |I've personally been using Linux and Unix professionally since 1995, and have a Linux-related sysadmin/systems architect job supporting academia. [131083870100] |I generally know my stuff, but am very happy to learn something new and to be corrected when I'm wrong. [131083880010] |SpamAssassin Under Linux VPS [131083880020] |I am going bald over this, guys: [131083880030] |On a Linux VPS (CentOS), I have set up a mail server (Exim+Dovecot+Clamav+SpamAssassin) but scanning mail for spam using SpamAssassin is proving to be a challenge. [131083880040] |I am not a Linux expert, especially when it comes to challenges associated with VPSes, but I can pretty much RTFM and succeed. [131083880050] |This one is beating me: [131083880060] |On my FreeBSD servers, I can easily get the desired results: [131083880070] |However, on the Linux VPS when I run the same test, it just hangs there ... nothing responds on the port, even though I can see spamd listening on 783: [131083880080] |If any of you run your servers on VPSes and are able to communicate with spamd, please tell me where to look. [131083880090] |If it might help: I can connect to the POP3 port on this server from remote locations, BUT not from localhost (127.0.0.1)! [131083880100] |Since spamd is set to listen on 127.0.0.1, I cannot test it from a remote server, and I don't intend to change it to listen on the public IP for test purposes. [131083880110] |Could it be something I need to change on the Linux side? [131083890010] |Can you please let us know what VPS technology the ISP is using? [131083890020] |I have run up against similar problems running some software under an Ubuntu VPS in the Xen environment. [131083890030] |The VPS was not getting the resources that were required to allow the software to run properly. [131083900010] |What aspects of Plan 9 have made their way into Unix? [131083900020] |Plan 9 was developed by Bell Labs as a successor to Unix.
[131083900030] |Although for various reasons it never quite materialized as such, a fair amount of development still went into Plan 9. [131083900040] |My question is, what - if anything - from Plan 9 has made its way into modern Unix? [131083910010] |The /proc filesystem in Linux is a Plan 9 idea. [131083920010] |The clone(2) system call in Linux is said to have been modeled after Plan 9's rfork() (http://news.ycombinator.com/item?id=863939, I personally don't see how the timing works out). [131083920020] |This paper: http://www.kernel.org/doc/ols/2006/ols2006v1-pages-101-112.pdf claims that Plan 9 inspired the "mount/filesystem namespace". [131083920030] |The /proc filesystem appears to have come to Plan 9 from 8th Edition Unix: http://en.wikipedia.org/wiki/Procfs , rather than the other way around. [131083930010] |The obvious one is probably UTF-8. [131083930020] |But that's probably too obvious. [131083930030] |Al Viro's grand re-architecting of the Linux VFS is heavily inspired by Plan9. [131083930040] |Especially the shift from "Everything Is A File" to "… And Every File Is A Mount Point". [131083940010] |Union file systems, such as unionfs and aufs, were inspired by Plan9 union directory mounts. [131083940020] |For example, they are used on live CDs to merge /usr/bin from the CD with a writable file system, so that you can make changes to /usr/bin, even though the CD is read-only. [131083940030] |Union file systems: Implementations, Part I on lwn.net [131083940040] |For example, if I understand the docs correctly, on Plan9, you could do: [131083940050] |And all the files in all three directories would appear in /bin (in case of duplicate names, the one in the last-specified directory wins, due to the -b option). [131083940060] |I'm not sure if this is what Bruce means by "mount/filesystem namespace", or if it is something different. [131083940070] |You could probably also say that sshfs was inspired by Plan9's ftpfs.
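The Plan 9 commands being described would look roughly like this (an illustration of bind(1) semantics from memory, not runnable Linux shell):

```
bind -b /usr/local/bin /bin   # -b: search the new directory before /bin's contents
bind -b $home/bin /bin        # bound last, so on duplicate names its entries win
```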
[131083950010] |Specify an alias as preferred editor in rc files (like .cvsrc)? [131083950020] |I have in my .cvsrc: [131083950030] |I want to create an alias (like rvim or something like that): [131083950040] |And do [131083950050] |Unfortunately, this doesn't work. [131083950060] |What do I have to do to make an alias work with the cvs -e parameter? [131083960010] |I don't know how to get the alias to work inside CVS, but what you can do is write a script which invokes vim the way you want. [131083960020] |The script would simply be: [131083960030] |And then just save it somewhere in your $PATH with the name my-vim (for example /usr/local/bin/my-vim), and you should be able to use my-vim as a command both on your command line and from within CVS. [131083960040] |If you don't want to put the script in a global directory on your system or don't have the rights to, just create $HOME/bin, put the script there, and add it to your $PATH. [131083970010] |Set the environment variable CVSEDITOR to your favorite editor. [131083970020] |This is not expanded by the shell, so the tilde won't work. [131083970030] |So write a one-line shell wrapper and call that in CVSEDITOR. [131083970040] |Write this as ~/bin/CVSEDITOR and make it executable: [131083970050] |Then put the following line in your ~/.profile: [131083970060] |If you want to use the same editor for everything, set the EDITOR and VISUAL variables instead (set them both to the same value). [131083980010] |Best way to do full disk encryption? [131083980020] |I have got a bit lost and am not sure which technology to choose for full disk encryption. [131083980030] |It is hard to tell if the information out there is still up to date. [131083980040] |Is there a current best practice solution for this? [131083990010] |I've used TrueCrypt before and find that it works very well. [131084000010] |There are a number of options.
[131084000020] |Distributions like OpenSUSE/SLES offer the option to encrypt one or more partitions at installation time. [131084000030] |Most of the time this is not an option, as chances are you do not want to reinstall your operating system. [131084000040] |I use the "encfs" file system; it should work on Linux, MacOS and FreeBSD, as encfs is built on top of FUSE (http://fuse.sourceforge.net/): [131084000050] |http://www.arg0.net/encfs [131084000060] |The idea is that you can designate certain directories as encrypted directories, and those would require you to enter a password at mount time to access them. [131084010010] |There are basically two 'standard' tools for partitions: [131084010020] |
  • TrueCrypt - cross-platform, open, plausible deniability
  • [131084010030] |dm-crypt - Linux-specific, uses the Linux Crypto API, can take advantage of any crypto hardware acceleration Linux supports, and device-mapper.
[131084010040] |There's also cryptoloop, dm-crypt's predecessor. [131084020010] |I've been using Debian with an encrypted file system and swap on my notebook for three years without problems. [131084020020] |It asks for the password early during the Linux boot and then continues to boot right into my desktop (I disabled the login dialog). [131084020030] |The setup is roughly sda5 -> sda5_crypt -> physical volume dm-0 -> volume group Linux -> logical volumes /dev/Linux/root for / and /dev/Linux/swap for swap. [131084020040] |Swap is encrypted to avoid leaking information. [131084020050] |There is also an unencrypted 200MB boot partition for the kernel, grub etc. [131084020060] |I remember that it was a complicated dance in the Debian installer until I got this right. [131084030010] |Export an env variable to be available to all sub shells, and possible to be modified? [131084030020] |Suppose I have [131084030030] |in ~/.bashrc. [131084030040] |I have an open gnome terminal, and in this terminal, I change $MY_VAR's value to 200. [131084030050] |So, if I do [131084030060] |in this terminal, 200 is shown. [131084030070] |Now, I open another tab in my gnome terminal, and do [131084030080] |...and instead of 200, I have 0. [131084030090] |What should I do to persist the 200 value when a terminal modifies an environment variable, making this modification (setting to 200) available to all subsequent sub shells and such? [131084030100] |Is this possible? [131084040010] |A copy of the environment propagates to sub-shells, so this works: [131084040020] |but since it's a copy, you can't get that value up to the parent shell — not by changing the environment, at least. [131084040030] |It sounds like you actually want to go a step further, which is to make something which acts like a global variable, shared by "sibling" shells initiated separately from the parent — like your new tab in Gnome Terminal.
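The one-way copy is easy to demonstrate (a minimal sketch):

```shell
export MY_VAR=0
bash -c 'MY_VAR=200; echo "child sees $MY_VAR"'   # the child got a copy
echo "parent still sees $MY_VAR"                  # ...and the parent is untouched
```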
[131084040040] |Mostly, the answer is "you can't, because environment variables don't work that way". [131084040050] |However, there's another answer, which is, well, you can always hack something up. [131084040060] |One approach would be to write the value of the variable to a file, like ~/.myvar, and then include that in ~/.bashrc. [131084040070] |Then, each new shell will start with the value read from that file. [131084040080] |You could go a step further -- make ~/.myvar be in the format MYVAR=200, and then set PROMPT_COMMAND='source ~/.myvar', which would cause the value to be re-read every time you get a new prompt. [131084040090] |It's still not quite a shared global variable, but it's starting to act like it. [131084040100] |It won't activate until a prompt comes back, though, which depending on what you're trying to do could be a serious limitation. [131084040110] |And then, of course, the next thing is to automatically write changes to ~/.myvar. [131084040120] |That gets a little more complicated, and I'm going to stop at this point, because really, environment variables were not meant to be an inter-shell communication mechanism, and it's better to just find another way to do it. [131084050010] |Don't use environment variables at all. [131084050020] |Use files. [131084050030] |To prevent processes from stepping on each other when updating/reading the file, use lockfiles and small front-end updater scripts whose purpose is just updating the file with $1 if it's not locked. [131084050040] |Lockfiles are implemented by basically checking whether a specific file exists (/var/run/yourscript.lck); if it does, waiting a while for it to disappear, and failing if it doesn't. Also, you must delete the lockfile when you are done updating the file. [131084050050] |Be prepared to handle the situation where a script can't update the file because it is busy. [131084060010] |Suppose I have export MY_VAR=0 in ~/.bashrc. [131084060020] |That's your mistake right there.
[131084060030] |You should define your environment variables in ~/.profile, which is read when you log in. ~/.bashrc is read each time you start a shell; when you start the inner shell, it overrides MY_VAR. [131084060040] |If you hadn't done that, your environment variable would propagate downwards. [131084060050] |For more information on ~/.bashrc vs ~/.profile, see my previous posts on this topic. [131084060060] |Note that upward propagation (getting a modified value from the subshell automatically reflected in the parent shell) is not possible, full stop. [131084070010] |Ctrl + Right and Right send same sequence in Putty > Screen > Vim [131084070020] |I'm using putty > screen > vim, and screen is sending the same sequence for Ctrl+Right and Right in application mode for vim. [131084070030] |There is an option to make putty send the cursor mode sequences (disable application cursor keys mode) when in application mode and that works, but when screen is introduced, something isn't right. [131084070040] |How would I go about fixing this? [131084080010] |If I set term = xterm in putty, and term=putty in screenrc, it seems to work. [131084090010] |State of Poulsbo/GMA 500 drivers [131084090020] |Currently, a casual browse through a number of Linux distros shows spotty Poulsbo drivers at best. [131084090030] |Has any headway been made recently towards either convincing Intel to coax the driver source out of PowerVR or an acceptable (I can install it without low frame rates, involved steps and without fear that a kernel update will break it) OSS driver solution? [131084090040] |I would love to put Linux on my little Acer netbook but I rely on it too much to install a nerfed driver. [131084100010] |How to create a dupe of a KVM/libvirt/virt-manager VM? [131084100020] |I'm a bit lost with virt-manager / libvirt / KVM. [131084100030] |I've got a working KVM VM (Windows XP) which works nicely. [131084100040] |The VM is backed by a 4GB file or so (a .img).
[131084100050] |Now I want to do something very simple: I want to duplicate my VM. [131084100060] |I thought "OK, no problem, let's copy the 4GB file and copy the XML file". [131084100070] |But then the libvirt FAQ states in all uppercase: "you SHOULD NOT CARE WHERE THE XML IS STORED" [131084100080] |http://wiki.libvirt.org/page/FAQ [131084100090] |OK fine, I shouldn't care. [131084100100] |But then how do I duplicate my VM? [131084100110] |I want to create a new VM that is a copy of that VM. [131084110010] |Apparently virt-clone is the way to go. [131084110020] |I tried duplicating the XML but it wouldn't appear under virt-manager. [131084110030] |That said, I still wonder how I can transfer an XML + .img to a new system... [131084120010] |virsh will allow you to edit, export, and import the XML definition for your servers. [131084120020] |I would use virt-clone to generate a cloned image file, and export the XML. [131084120030] |To be safe I would remove the clone configuration from the original server. [131084130010] |Understand error codes in Linux [131084130020] |I am working on a Linux (kernel version 2.6.32.28) laptop. [131084130030] |After I inserted / did file I/O on / removed an SD combo card, I got the following errors: [131084130040] |Now, I would like to understand what these errors mean. [131084130050] |As far as I saw, a few standard error codes are located in arch/powerpc/boot/stdio.h and others are scattered at various other places. [131084130060] |Is there any systematic way in Linux to track (& understand) the error codes (in the source)? [131084140010] |It ultimately ends up in errno.h, after multiple #includes. That error is ENOMEDIUM, found in /usr/include/asm-generic/errno.h. [131084140020] |Did you unmount it before removing it? [131084150010] |From the shell, you can run perror: [131084150020] |That comes with MySQL.
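For example, to decode the error from the question (ENOMEDIUM is 123 in the Linux headers; Python's standard library can stand in when perror is absent):

```shell
perror 123 2>/dev/null \
  || python3 -c 'import errno, os; print(errno.errorcode[123], "-", os.strerror(123))'
# on Linux, either way names ENOMEDIUM / "No medium found"
```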
[131084150030] |If you don't have MySQL, you can use Perl or Python, e.g.: [131084150040] |In a C program you can use the function with the same name: [131084150050] |It prints your message s with the reason for the error appended. [131084150060] |Or you can use: [131084150070] |to return the description of the error as a string so you can inspect it or print it however you like. [131084150080] |See man 3 perror and man 3 strerror for details. [131084160010] |There are standard error values, defined in errno.h. [131084160020] |You can look at this file on your system to see the numerical values. [131084160030] |On most systems, they're in /usr/include/errno.h or a file that it includes. [131084160040] |On Linux, most are in /usr/include/asm-generic/errno-base.h or /usr/include/asm-generic/errno.h, with a few more in /usr/include/bits/errno.h. [131084160050] |If you have a numerical value, call the standard library function strerror or perror to obtain the corresponding error message (in your current locale). [131084160060] |From the command line, a quick way to see an error string is one of [131084170010] |You may look into a little utility called errno. [131084170020] |It is essentially some shell hackery that uses sed to pull out information from the header files mentioned in other answers. [131084170030] |The output looks like the following: [131084180010] |Monitor keeps turning off after 10 minutes [131084180020] |Possible Duplicate: Disable screen blanking on text console [131084180030] |Hey, I use Gentoo as a server, so I usually don't even start X. But what's a bit annoying is that the monitor keeps turning off after 10 or 15 minutes, especially if I emerge something and just wait for the compiler to finish. [131084180040] |How can I turn this off?
[131084180050] |I already searched Google, but all the answers I found were related to X or X-based terminals. [131084190010] |use 'find' to search for directories !containing certain filetype foo [131084190020] |I have a few directories, some with a depth of 3, which contain mixed file types. [131084190030] |What I need to do is to rm -rf all the subdirectories that do not contain filetype foo. [131084190040] |Is this achievable with find somehow? [131084190050] |I do know that I can use find like this: [131084190060] |to delete all files within the directories that do not contain any file of type *.foo. [131084190070] |Now, how can I use this to not only delete all unwanted files, but all directories and subdirectories which do not contain *.foo? [131084200010] |These are your directories: [131084200020] |these are the directories that you want to keep: [131084200030] |or [131084200040] |but, I suppose, that if [131084200050] |
  • /aaa/bbb doesn't have a .foo and
  • [131084200060] |/aaa/bbb/ccc does have a .foo
[131084200070] |you wouldn't delete /aaa/bbb/ nor /aaa/, right? [131084200080] |So what you really need is to keep these base directories: [131084200090] |and delete all other bases: [131084200100] |for example: [131084200110] |(warning: the code below will not work for pathnames with spaces or special characters) [131084200120] |and then you want to do this recursively: [131084210010] |I'm not quite sure this can be done using only find, but I think we can do it using only bash and find. [131084210020] |Given this test tree: [131084210030] |I get this result: [131084210040] |which I think is what you want, i.e. don't delete dir1, because dir1/dir1.1/dir1.1.1 contains file.foo. [131084210050] |But note that it does process directories multiple times, so it might be slow for large trees. [131084210060] |If efficiency is important, I'd use a more powerful programming language. [131084220010] |(Your question is not clear: if a directory contains some.foo and some.bar, should it be deleted? [131084220020] |I interpreted it as requiring such a directory to be kept.) [131084220030] |The following script should work, provided that no file name contains a newline and no directory matches *.foo. [131084220040] |The principle is to traverse the directory from the leaves up (-depth), and as *.foo files are encountered, the containing directory and all parents are marked as protected. [131084220050] |Any reached file that is not *.foo and not protected is a directory to be deleted. [131084220060] |Because of the -depth traversal order, a directory is always reached after the *.foo files that might protect it. [131084220070] |Warning: minimally tested, remove the echo at your own risk. [131084220080] |For once, I'm not proposing a zsh solution. [131084230010] |IIUYC, you can simply first remove all unwanted files using [131084230020] |which may empty some directories.
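Both steps of this two-pass approach can be sketched with GNU find (demonstrated on a throwaway tree; try it on a copy before pointing it at real data):

```shell
# build a tiny tree to demonstrate on
mkdir -p top/a top/b/c
touch top/a/keep.foo top/b/junk.bar top/b/c/junk.baz
# pass 1: delete every regular file that is not *.foo
find top -type f ! -name '*.foo' -delete
# pass 2: delete directories left empty (-delete implies depth-first
# traversal, so directories containing only empty directories go too)
find top -mindepth 1 -type d -empty -delete
ls -R top   # only top/a/keep.foo remains
```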
[131084230030] |Then you can remove the empty directories (and directories containing only empty directories, etc.) like in my question [131084240010] |How to change default log location of SMF registered processes [131084240020] |I am trying to change the default log location (/var/svc/logs/) of each and every SMF-registered process and append to a particular file (/opt/smf.log). [131084250010] |What is the Firefox repository with all dependencies solved? [131084250020] |I'm using Fedora Core 3, and Firefox needs to be upgraded from version 2.0 to a more recent version, such as 3.0 or 3.6. [131084250030] |I used the Remi repository with the yum command, but that repository doesn't contain any of the other dependencies for Firefox v3.6. [131084250040] |What Fedora repositories can Firefox be installed from with all other dependencies? [131084260010] |I've got to get this out of the way first: [131084260020] |You really, really need to turn off that system. [131084260030] |Security updates haven't been released for it for over five years — and, what's more, that release is from back in the dark ages when the project hadn't really gotten off the ground. [131084260040] |If you must keep it similar, moving to CentOS 5 will give you an only slightly updated set of software but with current updates and some lifecycle left to go. [131084260050] |All that said, if I were forced at gunpoint, the thing I'd try is downloading the non-RPM-packaged tarball of Firefox directly from Mozilla. [131084260060] |It may be that the system is still too old for that, but I think it's your best bet. [131084260070] |Well, fourth-best bet, behind upgrading to Fedora 15, upgrading CentOS 5, or just throwing the system in a dumpster. [131084270010] |I agree with what other people have answered here. [131084270020] |Better to just upgrade. [131084270030] |You'll have a hard time running a newer Firefox on such an old OS.
You'll have to upgrade major parts of the OS just to get the more recent Firefox to compile, because it depends on APIs from more recent versions of gtk, gdk, libstdc++, etc. [131084270040] |You're pretty much going to be upgrading the core OS at that point. [131084270050] |If you really must use such old versions of software, you might be able to get away with using RHEL4 or CentOS4, which has about a year more life in it before it is unsupported. [131084280010] |Linux package manager architecture [131084280020] |I am looking for a guide on one of the Linux package manager architectures. [131084280030] |For example apt-get (dpkg), or yum (rpm). [131084280040] |I want to know how they manage the package list, file list, package versions and so on. [131084280050] |I wonder if anyone could point me to some resource. [131084290010] |Well, yum is RPM based - http://en.wikipedia.org/wiki/RPM_Package_Manager [131084290020] |And apt is (typically) deb based - http://en.wikipedia.org/wiki/Deb_%28file_format%29 [131084290030] |There are links from both of those pages that explain more. [131084300010] |Yum is RPM based and is completely written in Python. [131084300020] |Yum or RPM maintains a database (Berkeley DB), usually in /var/lib/rpm. [131084300030] |The database path is /var/lib/rpm/Packages. [131084300040] |See below: [131084300050] |[root@ec092 rpm]# pwd
/var/lib/rpm
[root@ec092 rpm]# file Packages
Packages: Berkeley DB (Hash, version 8, native byte-order)
When you use commands like rpm -qa, they query the Packages DB, which maintains all the attributes like package list, file list, package versions and so on. [131084300060] |Ubuntu's apt-get is written in C++ and is somewhat complex. [131084310010] |Weekly cron job to save list of installed packages [131084310020] |I am having trouble figuring out how to set up my first cron job.
[131084310030] |I simply want to run this command once every week: [131084310040] |My /etc/crontab file contains the line [131084310050] |So I need to place my command somewhere in the directory /etc/cron.weekly - but in which file? /etc/cron.weekly currently contains the following files: [131084310060] |Thanks for your help [131084320010] |Create a file with the following content (e.g. list_packages.sh): [131084320020] |Place this file in /etc/cron.weekly/ and it will run once a week. [131084330010] |Save this as a script, say installed_packages_list.sh: [131084330020] |Make it executable. [131084330030] |Then run [131084330040] |This will open up a file in a text editor. [131084330050] |Type this and save. [131084330060] |The script will be executed once a week, with the privileges of the user who scheduled the cron job. [131084340010] |First, you do not need to execute this job as the super user. [131084340020] |The easiest way to solve this is to edit the crontab of your normal user via [131084340030] |Depending on your EDITOR environment variable, the user's crontab is opened with your favourite text editor. [131084340040] |There you can add something like [131084340050] |You then have to create the pkg.backup.sh file with your dpkg command in it. [131084340060] |Btw, if you want to be able to easily restore the current package state of the machine (a.k.a. the package selection), then this command line is better suited: [131084340070] |To restore it, you just have to type [131084340080] |Actually, dpkg -l is useful in addition to that if you want to log exact package version numbers, too.
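A minimal version of the weekly job might look like this (a demo path is used below; the real file goes in /etc/cron.weekly/, and note that on Debian run-parts skips file names containing a dot, so avoid a .sh suffix there):

```shell
mkdir -p demo/cron.weekly               # stand-in for /etc/cron.weekly
cat > demo/cron.weekly/list-packages <<'EOF'
#!/bin/sh
# selections go somewhere root-writable; the exact path is an assumption
dpkg --get-selections > /var/backups/package-selections
EOF
chmod 755 demo/cron.weekly/list-packages
sh -n demo/cron.weekly/list-packages && echo "job script parses"
```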
[131084350050] |Thanks.
[131084360010] |I'm quite sure it's possible using Perl if you do the following:
[131084360020] |
  • use the command line option -0777 to slurp all the input at once
[131084360030] |
  • use Latin-1 coding
[131084360040] |
  • use s/\xFF\xFF\xFF\xFF.*//s
[131084360050] |I did something like this a long time ago, but I don't recall the options anymore.
[131084360060] |For awk I have no idea.
[131084370010] |Convert the pattern to octal and assign it to the record separator.
[131084380010] |Is cygwin like wine, but for linux applications?
[131084380020] |Does cygwin work like wine, providing a compatibility layer inside a foreign OS?
[131084390010] |Does Cygwin work like Wine?
[131084390020] |No.
[131084390030] |Does it provide a compatibility layer inside a foreign OS?
[131084390040] |Yes.
[131084390050] |Wine can run Windows executables on Linux, but Cygwin cannot run Linux executables on Windows.
[131084390060] |Instead, Linux programs have to be compiled specifically for Cygwin; the aim of the Cygwin project is to make that as straightforward as possible, i.e. it aims for source compatibility rather than binary compatibility.
[131084390070] |However, Winelib, which is part of the Wine project, is essentially Cygwin in reverse: it provides a source compatibility layer that allows Windows programs to be compiled into Linux executables.
[131084400010] |How to access a shared directory with Virtualbox OSE
[131084400020] |I would like to share data between a VirtualBox OSE guest OS and the host.
[131084400030] |This location should also be writable by either of the two.
[131084410010] |Open the VirtualBox GUI, select the guest you wish to have the shared directory, and select "Settings".
[131084410020] |In the dialog box, select the "Shared Folders" tab (on the left).
[131084410030] |Click on the "+" button on the right to create a new location (of an existing directory on the host).
[131084410040] |On the next guest boot, the location will be available to the guest OS (depending on how it accesses drives).
[131084410050] |As long as the underlying filesystem is writable by the VirtualBox guest, it will be writable by both guest and host.
[131084420010] |If you want to share the other way around:
[131084420020] |Set up a network share in the guest, and connect to it from the host.
[131084420030] |Note: use a host-only network interface.
[131084430010] |How to trace a process to the original user?
[131084430020] |Say, on a Solaris server, user1 logs in, switches to some other user, say sruser, using su -, and then starts a process with id X.
[131084430030] |Then another user, user2, logs in, switches to sruser the same way, and starts a process with id Y. (Multiple users can log in and switch to sruser simultaneously.)
[131084430040] |In the above scenario, is there any way we can trace the processes with ids X and Y back to the actual users user1 and user2?
[131084430050] |How can we trace a process to the original user?
[131084440010] |There is no standard way, and as far as I know there is no Solaris way either.
[131084440020] |If there were a reliable way to find out the “ancestor” user, the usual unix security model where root can do everything would not hold.
[131084440030] |You can check each process's parent recursively.
[131084440040] |(Warning, untested script.)
[131084440050] |This is not guaranteed to give any result, though.
[131084440060] |Maybe the parent's parent's parent process has exited.
[131084440070] |You can check what terminal a process is running on (ps -o tty= $pid) and check who last logged in on that terminal (who, last).
[131084440080] |This is only indicative, though: root can start processes on any terminal.
[131084440090] |What do you expect to see if user1 runs su user2 -c 'su user1 -c …'?
[131084440100] |It's conceivable that the OS would track the successive authentication chains in the kernel data structures associated with the process, but that's very far from the usual unix security model.
[131084450010] |There are commands to generate the process tree.
[131084450020] |ps takes the options 'fu', and I think Sun has ptree.