[131072960010] |Bash flush standard input before a read [131072960020] |Is there an easy way in bash to flush out the standard input? [131072960030] |I have a script that is commonly run, and at one point in the script read is used to get input from the user. [131072960040] |The problem is that most users run this script by copying and pasting the command line from web-based documentation. [131072960050] |They frequently include some trailing whitespace, or worse, some of the text following the sample command. [131072960060] |I want to adjust the script to simply get rid of the extra junk before displaying the prompt. [131072970010] |This thread on nonblocking I/O in bash might help. [131072970020] |It suggests using stty and dd. [131072970030] |Or you could use the bash read builtin with the -t 0 option. [131072970040] |If you only want to do it if the user is at a terminal, try this: [131072980010] |Deleting all files in a folder except files X, Y, and Z [131072980020] |I have a lot of files and folders in a specific folder and I want to delete all of them; however, I wanted to keep files X, Y, and Z. [131072980030] |Is there a way I can do something like: rm * | but NOT grep | X or Y or Z [131072990010] |Instead of using rm, it may be easier to use find. [131072990020] |A command like this would delete everything except a file named exactly 'file' [131072990030] |Many versions of should be able to support globbing and regular expression matching. [131072990040] |You could also pipe the output of find to rm as well [131073000010] |Move the files you want to keep away. [131073000020] |Go up one level, delete the folder. [131073000030] |Re-create the folder and move those files back. [131073010010] |Using zsh, with setopt EXTENDED_GLOB [131073010020] |But, you should probably instead move the files elsewhere, then delete everything. [131073010030] |It's far safer in terms of finger slips, such as hitting enter too soon. [131073030010] |Attention: Run the command and if the files to be deleted are the right ones, run it again and delete the hash character "#". [131073030020] |If the filenames are more complicated then that, do [131073030030] |Again, first look at the results then remove the hash sign. [131073030040] |This version - as suggested in the comments - saves some characters and looks a bit clearer. [131073040010] |Later versions of bash have the extglob shell option that gives you a syntax for doing what you want (check your man page under "Pathname Expansion" to see if your installed version has it): [131073040020] |To test, I suggest you first replace rm with echo to see if the list of files to be deleted is what you expect. [131073050010] |What software can I use to do live screen-casting in linux? [131073050020] |I'm looking for a software to do live screen-cast of our local user group meeting. [131073050030] |What software can I use to do that? [131073050040] |Ideally I'd like to capture the computer screen and speaker's audio and stream it live? [131073050050] |Edit: I'm not looking to just record my desktop and upload the video. [131073050060] |I'm trying to live stream the desktop as it is happening. [131073060010] |Without experience with screencasts, this is the way to search the repository for keywords like this: [131073060020] |The result is from xUbuntu 9.10 - your result may vary; give it a try. :) [131073070010] |The Ubuntu Screencast Team uses gtk-recordmydesktop. [131073080010] |I've read about using ffmpeg for screengrabbing before. 
[131073080020] |Check out ffmpeg with X11 grabbing + ffserver. [131073080030] |There may be some progressive deterioration in A/V syncing though. [131073090010] |Try ffmpeg with something like this: [131073090020] |ffmpeg -vcodec mpeg4 -r 10 -g 300 -vd x11:0,0 -s 1280x1024 http://localhost:8090/feed1.ffm [131073090030] |If it's not working right with the exact settings from the example, see the ffmpeg webpage and documentation for more details: ffmpeg.org [131073100010] |Use WebcamStudio for GNU/Linux. [131073100020] |(Reference: Live screencasting to ustream) [131073100030] |As their website says, [131073100040] |WebcamStudio For GNU/Linux creates a virtual webcam that can mix several video sources together and can be used for live broadcasting over Bambuser, UStream or Stickam [131073100050] |View the demo here. [131073110010] |VLC has a built-in desktop stream. [131073110020] |I don't recall if it does audio too, howerver. [131073110030] |If you need something quick you can try Big Blue Button's VMware image. [131073110040] |It sets up a server that can stream desktop, video, audio, and whiteboard. [131073120010] |Not a very geeky answer, but skype has a "share screen" option. [131073130010] |How do I switch to the Oxygen GTK feature in KDE? [131073130020] |I want to try the new oxygen to gtk port for kde 4.6. [131073130030] |But I'm not sure how to enable it. [131073130040] |I currently use qtcurve for gtk. [131073140010] |If I'm reading the question correctly, you want to use a GTK theme provided by KDE. [131073140020] |All you have to do is to modify ~/.gtkrc-2.0 (there are a few applications allowing you to select the theme in a GUI, i.e. lxappearance) and start an application using GTK. [131073150010] |In this link you can download it. [131073160010] |I believe merely having the Oxygen GTK Theme is not enough to enable it. [131073160020] |There is a seperate KCM module for KDE which permits KDE, infecting shall we call it, GTK applications with its presence. [131073160030] |When installed, ( when it works that is, its not working for me right now :( ) it appears in the KDE SystemSettings in the "Application Appearance" control, and lets you [131073160040] |
  • Set GTK Font sizes ( either to an arbitrary size, or "match KDE" )
  • [131073160050] |
  • Set GTK ColourScheme ( either to an arbitrary palette, or "match KDE")
  • [131073160060] |
  • Set GTK Widget Set ( either an arbitrary GTK theme, or the oxygen one )
  • [131073160070] |On Gentoo, this module is available as kde-misc/kcm_gtk, and the metadata says the homepage is : http://gtk-qt.ecs.soton.ac.uk , however, that just gives me a dead site :/ [131073170010] |Arch Linux + Gnome: Wine going berserk on "open with" menu. [131073170020] |First of all, when I installed Wine on my system, it decided to automatically associate its "notepad" application with all types of text and image files. [131073170030] |This was annoying enough, but on top of that, whenever I choose the "open with" dialog, I find that Wine has filled it with hundreds of duplicate entries. [131073170040] |Most of them are just labeled "A Wine application" with no icon, but any time I install another program with Wine, it adds 5-10 entries for that application as well. [131073170050] |How do I fix or disable this? [131073180010] |you can remove all the crap by editing ~/.local/share/applications/mimeinfo.cache - in my install there's never anything but wine-created crap in there, so I just trash it altogether. [131073180020] |How to stop wine doing it in the first place? [131073180030] |Dunno, sorry! [131073190010] |This might be useful for you: http://wiki.winehq.org/FAQ#head-c847a3ded88bac0e61aae0037fa7dbd4c7ae042a [131073200010] |Launch a GNOME session from terminal [131073200020] |I'm sshing into my friends machine and I'm wondering how would i launch a GNOME session over SSH? [131073200030] |I need to open a web browser on his machine to view something which can only be done from his hostname. [131073200040] |What's the easiest way to achieve this via SSH? [131073210010] |You can use ssh -X or ssh -Y to his machine to run apps on your friend's machine but using your Xorg. [131073210020] |The web browser will still be making the connection from his hostname. [131073220010] |Switch to a second terminal, for example tty2: CtrlAlt-F2, login and start a new X session on an available display: [131073220020] |Now ssh to the other machine, enabling X forwarding (or trusted X forwarding with -Y): [131073220030] |Once logged in, start a new gnome-session: [131073220040] |You can also pass gnome-session as a command to ssh. [131073230010] |If all you need to do is run a web session, appearing to come from your friend's computer, I'd suggest just running OpenSSH with the ssh -D8888 argument (8888 is just an example), and set up your local browser to point to localhost:8888 as a SOCKS5 proxy. [131073230020] |If you must run a browser over the link, there's no reason why you need to start up an entire GNOME session, just run ssh -X as described in the other questions, and then run the browser alone. [131073240010] |Are there distros that still ship GTK+ 1? [131073240020] |GTK+ 1 has been deprecated some years ago, and I'm curious if there's still anyone shipping it and/or apps using it. [131073240030] |Also, are there still actively-developed apps using it? [131073250010] |RHEL4 (and CentOS4) still ships gtk+-1.2 packages. [131073250020] |It looks like their gnome-libs package uses it. [131073260010] |ArchLinux still allows you to install GTK1. [131073260020] |The package page also lists a few apps depending on it. [131073270010] |Slackware ships with gtk1.2. [131073270020] |Currently (13.1) ships gtk+ 1.2.10 (as well as gtk-2 of course). [131073270030] |I believe debian/ubuntu has legacy packages for it too (but they're not installed by default) [131073270040] |I don't think newer stuff uses it, but a lot of already-existing, good (perhaps even maintained) software uses it. 
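If you want a quick look at what still depends on GTK+ 1 on a given system, something along these lines works; libgtk1.2 is the historical Debian/Ubuntu runtime package name and is only an example, so check what your own distribution actually calls it:
    # Debian/Ubuntu: list packages that (reverse-)depend on the legacy GTK+ 1 runtime
    apt-cache rdepends libgtk1.2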
[131073270050] |I see it from a computer power perspective. gtk-2 and the software written for it, like many newer things have a lot more candy and convenience built in, with negligible performance loss on a modern system. [131073270060] |Step back ten years though and you can tell a sharp difference in performance, where the gtk-1.2 equivalent software runs smoothly. [131073280010] |Current Fedora 15 includes gtk+ 1.2.10. [131073280020] |The package release is "71" — looks like it's gone through a lot. [131073280030] |What uses it? [131073280040] |In Fedora rawhide: [131073280050] |File that under "huh, lookit that". :) [131073280060] |None of these apps appear to be actively developed — those that are, like dillo, have newer versions that use different toolkits. [131073280070] |I can't imagine why an app that was actively worked on wouldn't have migrated by now. [131073290010] |Software developer switching from Linux to OS X, what are the gotchas? [131073290020] |I have used Ubuntu/Fedora/Red Hat/Suse but haven't used OS X at all. [131073290030] |If I have to start using OS X regularly, what are the things I should look out for? [131073290040] |Tools I use are GNU tool chain, C++/Boost, etc. [131073300010] |I did the same, years ago. [131073300020] |Off the top of my head, then: [131073300030] |
  • Your average desktop Linux has a richer userland than that of any other *ix I've used. [131073300040] |You'll probably miss different tools than I did, so no sense getting specific about recommendations for replacements. [131073300050] |Instead, just install Fink first thing. [131073300060] |Fink provides a Debian-like APT based software repository for OS X. Not everything you might want is available through Fink, but a whole lot is. [131073300070] |MacPorts is a popular alternative to Fink, taking more of a *BSD Ports approach.
  • [131073300080] |
  • As of Snow Leopard, the default compiler builds 64-bit binaries by default, instead of 32-bit as in previous versions of the OS. [131073300090] |This can cause problems in several ways: maybe you have old 32-bit libraries you can't rebuild but have to link to, maybe you're still running your system in 32-bit mode, etc. [131073300100] |One way to force a 32-bit build is to specify gcc-4.0 instead of gcc, since gcc is linked to the newer 64-bit gcc-4.2. gcc-4.0 is the old 32-bits-by-default Leopard compiler. [131073300110] |Another way is to try to remember to add -m32 to all your CFLAGS, CXXFLAGS, LDFLAGS...
  • [131073300120] |
  • Dynamic linkage is vastly different. [131073300130] |If you're the sort to write your ld commands by hand, it's time to break that habit. [131073300140] |You should instead be linking programs and libraries through the compiler, or using an intermediary like libtool. [131073300150] |These take care of the niggly platform-specific link scheme differences, so you can save the brain power for learning programs you can't abstract away with portable mechanisms. [131073300160] |For instance, you'll need to update your muscle memory so you type otool -L someprogram instead of ldd someprogram to figure out what libraries someprogram is linked to (see the short sketch after this list).
  • [131073300170] |
  • OS X handles the CPU compatibility issue differently than Linux. [131073300180] |On a 64-bit Linux where you have to also support 32-bit for whatever reason, you end up with two copies of things like libraries that need to be in both formats, with the 64-bit versions off in a lib64 directory parallel to the traditional lib directory. [131073300190] |OS X solves this problem differently, with the Universal binary concept, which lets you put multiple binaries into a single file. [131073300200] |You can currently have up to 4 CPU types supported by a single executable: 32- and 64-bit PowerPC, plus 32- and 64-bit Intel. [131073300210] |It's easy to build Universal binaries with Xcode, but a bit of a pain with command line tools. [131073300220] |You probably can't avoid learning how to do this during this present transition to 64-bit computing, however.
  • [131073300230] |
  • If you install any libraries through MacPorts, they are installed under /opt instead of /usr or /usr/local. [131073300240] |Add this to your .profile/.bashrc. [131073300250] |
  • The compiler and other developer tools aren't installed on OS X by default. [131073300260] |Also, old versions of these tools are often not available at the Apple developer site. [131073300270] |There's an "Optional Installs" item on the original OS CD/DVD that came with the computer or the OS X upgrade disk that has appropriate compilers for that OS. [131073300280] |Apple being Apple, you have to have a recent version of the OS to run the latest compilers, so if you keep old versions of the OS around for testing, you'll need to use the contemporaneous version of Xcode on that machine.
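As a rough sketch pulling together the 32-bit, universal-binary and otool points from the list above (the file names are made up, and the -arch/lipo invocations are ordinary Apple toolchain usage rather than anything quoted from this answer):
    # Force a 32-bit build on Snow Leopard, per the -m32 note above
    gcc -m32 -o hello32 hello.c
    # Build a 32+64-bit Intel universal binary from the command line
    gcc -arch i386 -arch x86_64 -o hello hello.c
    # lipo reports which architectures ended up in the resulting binary
    lipo -info hello
    # On OS X you ask otool, not ldd, what a binary is linked against
    otool -L hello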
  • [131073310010] |HUGE GOTCHA -- Mac OS filesystem is NOT case sensitive. [131073320010] |Although Fink and MacPorts are the traditional means of getting unix packages on OS X, I'd recommend checking out a newer tool called brew that has worked better for me and messed less with my system, and is much easier to use. [131073320020] |It basically just downloads tarballs and installs to /usr/local, but it automates the whole process very nicely. [131073320030] |http://mxcl.github.com/homebrew/ [131073330010] |Swap root at runtime [131073330020] |I am developing an embedded Linux system. [131073330030] |The system is usually installed by creating a ISO file which is written to a USB stick the board can boot from. [131073330040] |To make the installation possible to do automatically (say, over night) I would like to be able to do the installation on the board while the old system is running. [131073330050] |My installation has two parts: An initrd file which contains busybox and install scripts, and a .tar.gz archive that has the rest of the root file system to install. [131073330060] |
  • The bootloader loads the kernel, points it at the initrd, and boots it.
  • [131073330070] |
  • The initrd install scripts mount the target drive /dev/sda, format it, install the bootloader, and finally copy the root file system from the .tar.gz and the initrd.
  • [131073330080] |Now I instead want to [131073330090] |
  • Copy install.iso from the host computer to the target device. [131073330100] |(No problem)
  • [131073330110] |
  • Do the installation steps as above.
  • [131073330120] |My problem is that I don't know how I should go about replacing the currently running system with my new one. [131073330130] |I assume that the currently mounted root (/) would have to be unmounted and replaced by the initrd. [131073330140] |But I can't seem to figure out how! [131073340010] |I can think of different ways to do what you want. [131073340020] |All of which carry a level of risk and difficulty. [131073340030] |The main risk being that if the install goes wrong/breaks you will end up with an unbootable system that needs installing manually. [131073340040] |My main thought (which depends on your boot loader and similar) would be to use exactly the procedure that you have now. [131073340050] |Basically copy the new install image onto your USB stick which is permanently left in the machine. [131073340060] |Then just reboot and let it boot from that and install normally. [131073340070] |It relies on the following [131073340080] |
  • Hands-off install. [131073340090] |I'm assuming that you already have this; otherwise an overnight reinstall would not be possible in the first place.
  • [131073340100] |
  • Your boot loader being able to choose between USB and local filesystem boot automatically (or via an application-level command before rebooting)
  • [131073340110] |
  • At the end you need to configure your boot loader to boot from the board's local storage rather than the USB device, or just erase the contents of the USB device/make it unbootable so that the boot loader falls through to the local boot
  • [131073340120] |An alternative to that would be to have two boot/root partitions on your board and just install into the one that you aren't using and at the end of the reboot force your bootloader to boot into the other. [131073340130] |You could use a chroot environment to force your installer to think that it was booting from scratch. [131073340140] |That is probably a big change in your environment though and would not be a quick win. [131073350010] |Have you tried mounting initrd and then pivot_root? [131073350020] |It seems what you want to do. [131073360010] |How to make dpkg faster? [131073360020] |Package (un)installation on a Debian system is horribly slow, partly because it works with a whole bunch of (small) files. [131073360030] |Short of getting a faster storage, how do I speed it up? [131073360040] |I'm thinking maybe loading some highly-accessed directory onto RAM as one solution, but how do I do that? [131073360050] |Is there a 'better' solution? [131073370010] |For this solution you are going to need a version of dpkg of at least 1.15.8.6. [131073370020] |Since that version of dpkg, there is a new force-unsafe-io option that will disable dpkg from calling sync() and the like between every package. [131073370030] |You can add this option to the config in /etc/dpkg so that it is always in effect. [131073370040] |If you have an older version of dpkg you can alternatively use eatmydata. [131073370050] |And remember both of these solutions are unsafe and probably will lead to data loss if there is a power failure while or shortly after dpkg is running. [131073380010] |Can driver installation procedures be made into a single package? [131073380020] |Here is an example of the instructions for installing a Brother printer: [131073380030] |http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/instruction_prn3.html [131073380040] |and this just a part of the procedure... there are separate "pre-install" procedures and separate procedures for scanning and fax drivers. [131073380050] |Couldn't the vendor produce a single package for each model of hardware that would "do everything"? [131073380060] |If so, why don't they? [131073380070] |And could the "community" do it for them? [131073390010] |This makes me remember the process of installing the proprietary driver for my ATI graphics card. [131073390020] |The package from ATI is really the "single package" that you are talking about. [131073390030] |It does everything needed for the driver to work. [131073390040] |So, from the experience above the answer is yes, the vendor can provide convenient Linux packages. [131073390050] |However, doing so would require extra effort (read: money). [131073390060] |Some company do, some don't, and some provide a "best effort" support. [131073390070] |What the community really want, is for the companies to release hardware specifications needed to implement opensource versions of the drivers, so as not to depend on the company to release updates/bugfixes for the drivers. [131073390080] |However, some companies want to keep a secret over their specifications, which makes opensource drivers not universally available. [131073390090] |In your case I think the process is still pretty simple, just a config file and service restarting. [131073390100] |Of course the "community" (that includes you) could have wrapped it in a package, the same way that flash is included in many distros. [131073390110] |Maybe the piece of hardware is just not popular enough to gain attention among packagers... 
[131073400010] |Select/Paste Word-Wrap on X-Based Terminals [131073400020] |Is there a sure-fire method to cut &paste word-wrap on X-based terminals? [131073400030] |That is, if I select, then paste via button-3, if the text goes to the end of the line and wraps, the paste assumes carriage return and inserts it. [131073400040] |I'd rather: [131073400050] |This drives me crazy. [131073400060] |Especially when pasting code that is >80 columns. [131073400070] |Sometimes it works, most of the time it doesn't. [131073410010] |I agree this makes a huge difference in usability. [131073410020] |I find it very important to be able to triple-click and copy/paste lines which have wrapped. [131073410030] |There are two ways this can work correctly. [131073410040] |Either triple-click (and click drag) selects past the end of line onto the next visible line, with no CR inserted. [131073410050] |Or else triple click selects only one visible line, but when you paste it, there is no CR. [131073410060] |The broken way is when triple-click selects only the visible line, and then it assumes a CR at the end when it pastes. [131073410070] |I have seen bad behavior from xterm and gnome terminal. [131073410080] |I've seen good behavior from Mac Terminal app, and from iTerm which is another Mac terminal app. [131073410090] |I have no hints for turning a bad situation into a good one other than trying a different terminal program. [131073410100] |I'm certain this ties into whether the terminal program actually remembers the locations of all the CRs it sees, or whether it simply interprets the CR and then forgets it. [131073420010] |Unix terminals don't do word wrap. [131073420020] |This is a feature of the application running inside the terminal. [131073420030] |The terminal receives the same instructions to display [131073420040] |If you can, tell your application not to do any wrapping. [131073420050] |This way the terminal will wrap, but always on the last column. [131073420060] |If you triple-click on this, you get This is a single long line of text with a newline only at the end (or no newline at all depending on the terminal emulator). [131073420070] |When pasting, the terminal just sends the text to the application, as if you had typed it. [131073420080] |If you observe any behavior such as wrapping that depends on the terminal width, it's due to the application, not to the terminal. [131073430010] |Flash using wrong audio output with Pulseaudio [131073430020] |I'm using Arch Linux x86_64 with Gnome and PulseAudio. [131073430030] |I have USB speakers, which work for the sounds in system menus and most other applications. [131073430040] |But whenever I play Flash videos, they output their sound through the onboard sound card (which usually isn't hooked up to anything). [131073430050] |This happens in both Firefox and Chromium. [131073430060] |I can't find anything that would let me redirect Flash's audio output to my USB speakers. [131073430070] |The following relevant packages are installed: [131073440010] |To begin with, one very good source for information on pulseaudio problems is the perfect setup page. [131073440020] |I wonder, do all ALSA applications have this problem? [131073440030] |For example, if you force, say mplayer to use alsa ("mplayer -ao alsa ...") does that go to your USB speakers? [131073440040] |An other perhaps more direct reason for this could be that your flash plugin isn't configured properly. 
[131073440050] |You do this by surfing to the settings manager (that link actually leads to it immediately; it's not a screenshot or tutorial); see if the speaker output is configured correctly. [131073450010] |The solution I found was to install the libflashsupport-pulse package from AUR and restart my computer. [131073460010] |Can I safely delete ~/Trash file? [131073460020] |My Uni account is over disk quota and requesting more quota takes time. [131073460030] |Unfortunately, I need disk space now and I noticed that the Trash file in my home directory is quite a large file. [131073460040] |Now I’m a total postfix noob but I’m assuming that this somehow relates to my mail inbox? [131073460050] |(I do have some doubts, since although his file clearly contains email messages, tail Trash reveals that the last message is from 2006). [131073460060] |Furthermore, my mail client (connected via IMAP) reveals that my trash folder is empty. [131073460070] |Can I just delete the Trash file or do I need to fear dire consequences? [131073460080] |The very first message in that file reads as follows: [131073460090] |Furthermore, there’s another file in my home folder called Deleted Messages – what’s the difference between the two? [131073470010] |This Trash file is unrelated to Postfix. [131073470020] |It's also probably not what you see over IMAP: while the IMAP server could be configured to serve this file as a mail folder called Trash, it's likely that it's showing a directory called Trash near other directories that you see over IMAP. [131073470030] |The mail you show is an oddity of Pine. [131073470040] |It's likely that you used Pine at some point, and that you or your administrator configured it to save deleted mails into this Trash file. [131073470050] |Deleting this file is unlikely to cause any trouble (of course, make a backup on your own PC or wherever just in case one of your old deleted mails turned out to be important). [131073480010] |Ubuntu 10.04 bad install...now Mac wont boot... [131073480020] |Hey all, I've been having a problem all day. [131073480030] |It all started when I tried to partition my HD to install Ubuntu with bootcamp on my mac. [131073480040] |The install froze and when I restarted the apple logo was replaced with a "do not enter" sign (circle with slash). [131073480050] |I've been searching forums for a solution and even tried to run a LiveCD but that failed. [131073480060] |I think I need to remove GRUB from my MBR. [131073480070] |Anyone have some suggestions/help? [131073480080] |Much appreciated. [131073490010] |did you try booting from an OSX boot disc ? then as it wants to start (re)installing OSX,just stop and go into disk utilities to see if you can restore your OSX partition as the boot partition.. i suppose the Live CD you are referring to is an Ubuntu 10.04 cd ..right ? [131073490020] |tell me what you tried already [131073500010] |How do I reinstall software properly? [131073500020] |I got myself into trouble with mysql on debian lenny. [131073500030] |Details can be found here [131073500040] |I tried dpkg-reconfigure mysql dpkg --purge mysql-server apt-get install mysql-server mysql_install_db and variants of these in different order using killall mysql and killall mysqld and had no luck whatsoever. [131073500050] |I even deleted every mysql folder listed in whereis mysql [131073500060] |How do I properly reinstall a package? [131073500070] |Because the above is not working for me. 
[131073510010] |On Ubuntu one can go into synaptics and search for the software to remove. [131073510020] |When right-clicking to see the contextual menu, one can see and select the option called "Mark for removal" (or "Mark for complete removal" if you want to purge the downloaded installer as well) [131073520010] |The preferred way is via apt: [131073530010] |Shell script execution on multiple servers [131073530020] |Is it possible to run one part of the ksh/sh script on one server then ssh to another server and continue with the rest of the script? or is there a way around? [131073530030] |Anyone had experience with this ? [131073530040] |here are some details: [131073530050] |I have a user with which I don't have to authenticate each time I access a server, so I can hop from one server to another without any keyboard interaction [131073530060] |I've already tried this, wanted to seperate logic into another script then : [131073530070] |I tried this but it doesn't seem to execute this script on another server [131073530080] |and one more thing the whole code above is part of the for loop and I basically execute same script for list of servers. [131073530090] |For testing purposes, the list has two servers, for some weird and twisted reason only the last/seconds server script works as I expect. [131073530100] |another detail: [131073530110] |all server can see myscript.ksh it is visible and executable to all of them [131073540010] |To be honest your wording is a bit vague but let's trace possible causes: [131073540020] |What you described should work just fine, but see ssh-copy-id source to look for discrepancies with your own script (it is a shell script bundled with OpenSSH). [131073540030] |You mentioned that the command does not work on one of your machines, maybe the server has a different setup preventing the script to start in the first place. [131073540040] |Also, you mentioned authentication, so I suppose you're using passwordless public key authentication, and you might want to restrict the commands authorized with the associated key: see man ssh, section AUTHORIZED_KEYS FILE FORMAT. [131073550010] |It sounds like you want to run a sequence of commands on another server without having to log in multiple times. [131073550020] |To accomplish this, you could do: [131073550030] |This would run remote_command 1, and when it finishes, run remote_command 2, then remote_command 3 all on the remote server [131073560010] |Holding left arrow triggers permanent Mode_switch [131073560020] |This is a weird one... [131073560030] |I recently upgraded from Debian lenny to squeeze (following the upgrade instructions step by step). [131073560040] |Everything went surprisingly well, except for one piece of strange new behavior I haven't encountered before. [131073560050] |First, the left arrow key won't work at all (although right, up, and down do). [131073560060] |Second, if I keep the left arrow key held down for a few seconds and then release it, I get trapped in Mode_switch mode. [131073560070] |In other words, I have a .Xmodmap file with the following: [131073560080] |After I've held down the left arrow key for a few seconds, every a or e character I type is accented, and the only way I can revert this is to log out and log back in to my Gnome session. [131073560090] |I know something keyboard-related changed from lenny to squeeze, but I don't know how to troubleshoot something like this. [131073560100] |Any ideas what's wrong? 
[131073570010] |After looking further, I found that the keycode set as Mode_switch actually didn't correspond to my Alt_R key as I intended (I changed keyboards a while ago, but didn't notice this until the upgrade). [131073570020] |Setting the keycode to the correct key fixed the problem. [131073580010] |Why use cpio for initramfs? [131073580020] |I am making my own initramfs following the Gentoo wiki. [131073580030] |Instead of the familiar tar and gzip, the page is telling me to use cpio and gzip. [131073580040] |Wikipedia says that cpio is used by the 2.6 kernel's initramfs, but does not explain why. [131073580050] |Is this just a convention or is cpio better for initramfs? [131073580060] |Can I still use tar and gzip? [131073590010] |I'm not 100% sure, but as the initial ramdisk needs to be unpacked by the kernel during boot, cpio is used because it is already implemented in kernel code. [131073600010] |From what I remember of my old SysV days, cpio could handle dev files, but tar could not; this made cpio the 'raw' backup utility of choice before dump came around. [131073600020] |It was also easier to handle partial filesets and hard links so incremental backups were easier. [131073600030] |I think that GNU tar has caught up with cpio features so now it is just a matter of user comfortability. [131073600040] |Both cpio and tar should be installed by default. [131073610010] |Force less to display a file as text [131073610020] |Sometimes less wrongly recognize file as binary and tries to show hexdump on LHS (usually ones with non-alphanumeric characters but still containing printable ASCII characters). [131073610030] |How to force it to recognize it as text? [131073620010] |I think that you are looking for [131073630010] |I think you have (or your distribution has) a LESSOPEN filter set up for less. [131073630020] |Try the following to tell less to not use the filter: [131073630030] |For further exploration, also try echo $LESSOPEN. [131073630040] |It probably contains the name of a shell script (/usr/bin/lesspipe for me), which you can read through to see what sort of filters there are. [131073630050] |Also try man less, and read the Input Preprocessor section. [131073640010] |Selectively disabling gvfsd-cdda in Debian Squeeze? [131073640020] |Is there a way to selectively disable gvfsd-cdda on Debian Squeeze? [131073640030] |Since I updated my machine to Squeeze grip can no longer eject a CD, which interferes with ripping. [131073640040] |I traced it back to gvfsd-cdda, but found no preference or configuration to disable it. [131073640050] |I can't uninstall package gvfs-backends either, because it is required by gnome-core. [131073640060] |I did find /usr/share/mounts/cdda.mount, but disabling that feels like an ugly hack that will be overwritten on the next update of the package. [131073650010] |I have no idea if there's a way to fix or cleanly disable gvfsd-cdda, but you can move it out of the way without running into trouble with the package manager. [131073650020] |Debian (and more generally any distribution using dpkg) has a generic mechanism for providing your own version of a file that's normally under the package manager's control. [131073650030] |If you find you must change /usr/lib/gvfs/gvfsd-cdda or /usr/share/mounts/cdda.mount, use dpkg-divert so that the package's version will be diverted to a different file name: [131073650040] |or perhaps [131073660010] |PHP imageline function doesn't work. What and how should I download? 
[131073660020] |PHP function imageline on localhost works, but on VPS cloud hosting doesn't. [131073660030] |I guess I have download something, but I don't know what. [131073660040] |Could you give me a hand? [131073660050] |P.S. PHP GD is already downloaded. [131073670010] |Downloaded or installed? [131073670020] |There is not enough to just download it, you need to install and enable it. [131073670030] |You didn't provide what OS is there on your VPS, so I can't say how exactly. [131073670040] |In CentOS/RHEL/Fedora you can do it by using yum install php_gd, enabling gd extension in php.ini and restarting httpd server. [131073680010] |How can I make a user able to log in with ssh keys but not with a password? [131073680020] |I would like to create a user and have no password. [131073680030] |As in you cant log in with a password. [131073680040] |I want to add keys to its authorized_keys by using root. [131073680050] |This is for my automated backup system. [131073690010] |Just don't set password for user. [131073690020] |If there is already a password remove it by using passwd -l . [131073690030] |But it would be still possible to log in as this user - by using authorized_keys, or via su or sudo from some privileged account. [131073700010] |Use of passwd -d is plain wrong , at least on Fedora, on any linux distro based on shadow-utils. [131073700020] |If you remove the password with passwd -d, it means anyone can login to that user (on console or graphical) providing no password. [131073700030] |In order to block logins with password authentication, run passwd -l username, which locks the account making it available to the root user only. [131073700040] |The locking is performed by rendering the encrypted password into an invalid string (by prefixing the encrypted string with an !). [131073700050] |Any login attempt, local or remote, will result in an "incorrect password", while public key login will still be working. [131073700060] |The account can then be unlocked with passwd -u username. [131073700070] |If you want to completely lock an account without deleting it, edit /etc/passwd and set /sbin/nologin or /bin/false in the last field. [131073700080] |This will result in "This account is currently not available." for any login attemp. [131073700090] |Please refer to passwd(1) man page. [131073710010] |ffmpeg and libmp3lame produces bad audio quality? [131073710020] |Hello, when I get a flash video from YouTube, why is the quality of the audio much worse than the origin video on YouTube? [131073710030] |When I downloaded the flash movie, I convert it to avi like this: [131073710040] |I already set -aq (audio quality) to 300, but no difference to 100 or 200. [131073710050] |Moreover 100 is the max. value in my opinion. -ar (frequecy) 44100 should be ok too and the bitrate in bit/s (-ab) should be 256kb/s (2097152 / 1024 / 8). [131073710060] |I am not sure what is the right bitrate for a good quality but I think 256kb/s should be fine. [131073710070] |Or did I calculate it wrong? [131073710080] |What could be the problem? [131073720010] |This is the command line you want: [131073720020] |Using the video you suggested as example i have almost the same quality in vlc as original (original has aac encoding). [131073720030] |You were specifying a way too high bitrate (2Mb/sec, 192kb/sec is far enough), i don't think it had any collateral effect on your command line though. [131073720040] |The difference is made by -qscale 8 which let ffmpeg output a VBR mp3 instead of a CBR stream. 
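For reference, a conversion of the kind described would look roughly like this; the file names are made up, and -aq is simply the per-audio-stream spelling of the -qscale quality setting mentioned above (8 picks one of LAME's VBR levels):
    # Old-style ffmpeg option syntax, as used elsewhere in this thread; VBR MP3 audio instead of a fixed bitrate
    ffmpeg -i input.flv -vcodec mpeg4 -acodec libmp3lame -aq 8 -ar 44100 output.avi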
[131073730010] |How to get elapsed time from two "text based" dates. [131073730020] |I would like to know if there is a way to calculate the difference between two times in bash. [131073730030] |I have two fields that are extracted from a log file: [131073730040] |Start time: Feb 12 10:02:10 End time: Feb 12 10:53:15 [131073730050] |What I need is a manual way to get the elapsed time between these 2 dates. [131073730060] |Since the time values are "text based", and not from a date generating command, I don't know if there's a way to do it. [131073730070] |The times are 24 hour based, so hopefully that'll make times spanning midnight easier to handle. [131073730080] |Any suggestions will be greatly appreciated. [131073730090] |Peter V. [131073740010] |You could use some evil trickery: [131073740020] |will return the number of seconds since the epoch for that date. [131073740030] |Combining this with the same command (for the other date) and running through bc: [131073740040] |Will give you [131073740050] |the number of seconds between the times. [131073740060] |You could probably parse this out with awk somehow if you needed to run it more than once. [131073750010] |Change default for Nautilus (file_manager) from "Forget password immediately"? [131073750020] |When opening a tab wich needs to mount a Samba share Nautilus 2.30.1 displays a window: [131073750030] |How can one set the default to something other than "Forget password immediately"? [131073760010] |How to invoke an Openoffice macro from the Linux command line. [131073760020] |I have an OpenOfice macro that I want to use to process the contents of an OpenOffice file. [131073760030] |I am able to do this by opening the file with OpenOffice and then running the macro. [131073760040] |How do I invoke the macro from the Linux command line without using the GUI? [131073760050] |Something like: [131073770010] |Create an event-driven macro assigned to the Open Document event for a particular document or a common document. [131073770020] |Then you would load the document by itself to act on itself or load it along with other documents to act on one or more of them. [131073770030] |This is along the lines of the idea of an auto-run macro. [131073780010] |RealTek 8101E ethernet card or similar doesn't work on FreeBSD [131073780020] |Hi, [131073780030] |I'm trying to install FreeBSD on my netbook using an usb sitck but I'm running into a major problem, my ethernet card doesn't work. [131073780040] |ifconfig -a [131073780050] |only returns me [131073780060] |lo0 … [131073780070] |but [131073780080] |dmesg | grep re0 [131073780090] |gives me: [131073780100] |The installer crashes when I try to setup the network on FreeBSD 8.1 amd64 or 8.2RC3 amd64 on usb stick. [131073780110] |Same dmesg messages with 9.0 CURRENT but it doesn't crash nor find re0. [131073780120] |The network works fine with NetBSD 5.1 amd64 or OpenBSD 4.8 i386 using re. [131073780130] |Can someone please help me to solve this ? [131073780140] |Thanks. [131073790010] |What are you using instead of DCE Distributed File System? [131073790020] |What are you using instead of DCE Distributed File System? [131073790030] |How does it compare? [131073790040] |Or are you still using it? [131073790050] |Note that DCE/DFS is not Microsoft Distributed File System [131073800010] |I used Hadoop FS some time ago. [131073800020] |For instance, Hadoop documentation seems to be better than DCE/DFS. [131073800030] |Also, it's developed actively. 
[131073800040] |Earlier IBM provided support for DCE, but not anymore, at least actively. [131073800050] |Point-to-point comparison is pretty hard, as I couldn't find any good feature lists for DCE/DFS. [131073800060] |For Hadoop, see for example user guide. [131073800070] |Second, Hadoop with MapReduce provide powerful distributed computation platform. [131073810010] |DCE/DFS always had at least 3 strikes against it: [131073810020] |
  • It was outlandishly complicated.
  • [131073810030] |
  • It was costly.
  • [131073810040] |
  • It was proprietary.
  • [131073810050] |I know, they released DCE 1.1 as more-or-less open source, but by then, it was too late. [131073810060] |I've always had good luck with NFS, V3 or later, but then I'm not what you call a demanding user. [131073810070] |I have the impression that a lot of places use Samba servers with CIFS, but I don't have direct experience. [131073820010] |I'm using AFS, NFSv3, NFSv4, and CIFS currently. [131073820020] |CIFS is primarily for supporting Windows clients and I find it less suitable for UNIX/Linux clients since it requires a separate mount and connection for each user accessing the share. [131073820030] |Users can share the same mount point, but they will be seen as the same user on the server-side of the connection. [131073820040] |NFSv3 is primarily used by directories being exported to other UNIX/Linux servers since it's stable and simple to deal with. [131073820050] |With both AFS and NFSv4 I am using Kerberos. [131073820060] |Using NFSv4 on Ubuntu 8.04 and older I found it a bit unstable, but it has steadily improved and I have no stability issues with 10.04+. [131073820070] |It does appear to be a performance bottleneck to use sec=krb5p so I tend to use sec=krb5i or sec=krb5. [131073820080] |One issue I have is how Kerberos tickets are handled with Linux's NFSv4 layer. [131073820090] |A daemon periodically scans /tmp for files beginning with krb5cc_ and matches the ticket up with the file owner. [131073820100] |If a user has more than one ticket they own under /tmp, it will use whichever ticket file is found first when scanning. [131073820110] |I've accidentally changed my identity when temporarily acquiring a ticket for other purposes. [131073820120] |AFS stores tickets in Kernel-space and are associated with a login session normally. [131073820130] |I can login twice as the same Linux user, but still use different AFS credentials on each login without interference. [131073820140] |I also have to explicitly load credentials into the kernel which normally happens automatically during login. [131073820150] |I can safely switch tickets in userspace without interfering with file permissions. [131073820160] |Overall, I like many of the ideas of AFS better than NFSv3/4, but it does have quite a bit smaller of a community developing it compared to NFS and CIFS. [131073820170] |It's also properly known as OpenAFS, AFS is the name of IBM's closed-source offering. [131073820180] |A big difference between AFS and NFS is that AFS is more consistent in it's network protocol and support. [131073820190] |AFS does provide locking in-band instead of using a side-band protocol like NFSv3. [131073820200] |It also offers a more sophisticated ACL system in-between POSIX ACLs and NFSv4/NTFS/CIFS ACLs. [131073820210] |This, unlike the POSIX ACL addition to NFSv3, is a standard part of it's protocol and both Windows and UNIX/Linux clients can access and modify them. [131073820220] |It also doesn't suffer from the 16 group limit that many NFSv3 servers have. [131073820230] |This makes AFS appear more consistent in my mind across Windows and UNIX systems. [131073820240] |Also, since AFS is only accessible via it's network protocol, there aren't issues where the actual underlying filesystem behaves slightly differently from the exported view of it. [131073820250] |For example, in Linux, a file may have MAC or SELinux labels controlling access or other extended attributes that aren't visible over NFS. [131073820260] |AFS, on the other hand just doesn't have extended attributes. 
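For what it's worth, a Kerberized NFSv4 mount of the kind described above looks roughly like this (the server name and paths are made up):
    # One-off mount with Kerberos integrity protection
    mount -t nfs4 -o sec=krb5i nfsserver.example.com:/export/home /mnt/home
    # Equivalent /etc/fstab entry
    # nfsserver.example.com:/export/home  /mnt/home  nfs4  sec=krb5i,rw  0  0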
[131073830010] |How can I set the processor affinity of a process on Linux? [131073830020] |The question is all in the title: How can I set the processor affinity of a process on Linux? [131073840010] |I have used taskset for this. [131073840020] |If you have taskset installed, something like: [131073840030] |would set the process with id 45678 to have an affinity to cpus 1 and 3. [131073850010] |Inside the process, the call would be sched_setaffinity(), or for pthreads stuff, pthread_setaffinity_np() [131073850020] |On a related note, if you're worrying about CPU affinity of your program, it may be worthwhile to pay attention to how it's doing memory allocation as well. [131073850030] |Larger systems with memory attached to more than one controller (i.e. multiple CPU sockets, each with their own) will have variable latency and bandwidth between different CPU-memory pairs. [131073850040] |You'll want to look into NUMA affinity as well, using the numactl command or the system calls that it works with. [131073850050] |One program I worked on got a 10% performance improvement from this. [131073860010] |You need to install schedutils (Linux scheduler utilities). [131073860020] |I have use it on my Ubuntu Desktop. [131073860030] |SF link [131073870010] |`/srv/stevedore/` vs `/var/stevedore/` for data files for new application? [131073870020] |I am creating a new application. [131073870030] |I currently have all of the server code, configuration, log and data files in directories that do not follow any kind of standard. [131073870040] |Looking at the Wikipedia article on Filesystem Hierarchy Standard I have come up with this new arrangement: [131073870050] |Having used Apache were the data files are located in /var/www/ the pattern would seem to indicate that the data files should be located here instead: [131073870060] |Additionally, the users check out local copies of documents for editing which are located in these directories: [131073870070] |
  • Which is better for a default and why: /srv/stevedore/ vs /var/stevedore/?
  • [131073870080] |
  • How about ~/MyStevedore/ for standard user specific directories?
  • [131073870090] |For more details on the application see the Stevedore Web Site and the code at the time of this posting. [131073880010] |There really isn't much of a difference between /srv/stevedore/ and /var/stevedore/. [131073880020] |What kind of data is it? [131073880030] |If its data files being sent over the webserver, /var/www might be better. [131073880040] |At this point its mostly personal preference among those three. [131073880050] |If the data is the state of your program, then /var/lib/stevedore is even better; just like databases keep their data files there. [131073880060] |As for the user directories, do users actually get a local unix account? [131073880070] |If so, ~ is fine. [131073880080] |If not, then its not a good idea. [131073880090] |It would be too easy for someone to think those users no longer exist, so their home directories can be removed. [131073880100] |A better place would be /var/lib/stevedore/username. [131073890010] |Ultimately, if you're planning on getting people to package this for their distributions, what you'll want to do is establish either a compile-time or configured location for each of these things. [131073890020] |For instance, using autoconf/automake/configure to set macros for --prefix, --bindir, --datadir and so on. [131073890030] |In general, I think most distributions don't use /opt/ for packaged executables (these would go into /usr/bin or /usr/sbin as appropriate). [131073890040] |It seems that users would be expected to use the files in ~/MyStevedore/ on a regular basis, so not creating it as ~/.MyStevedore/ is forgivable, but some distributions may use a file manager that expects everything in ~/MyDocuments/MyStevedore/ or ~/Desktop/MyStevedore/ [131073900010] |
  • If you plan on inclusion in distributions, use the standard buckets (bin, sbin, etc) and avoid /opt *.
  • [131073900020] |
  • I'd be really annoyed to have enforced folders in my home dir. I've just cleaned that stuff up! ;-) Sure you don't want to use /srv/stevedore/users/(user)? [131073900030] |It's a document server, right?
  • [131073900040] |Other rationales: [131073900050] |The webdata can be put in /usr/share/stevedore/web too. [131073900060] |System administrators likely have a custom Apache setup, and the last thing you'd like to see is files in /var/www/htdocs/appname. [131073900070] |Rather, just make sure they are available (and share fits that purpose really well) so the admins can make an Alias in Apache and link the webapp in the proper websites. [131073900080] |You can pick /var/stevedore, or /srv/storedore, that's fine as long as it's configurable. [131073900090] |You can expect Debian users to like /var/*, and SUSE users to prefer /srv/*. [131073900100] |If you go for that route, make sure it's configurable for packagers. [131073900110] |The same also applies to the install root. [131073900120] |That can be /usr, /usr/local, or something else the administrator or packager desires, or it has to be configurable. [131073900130] |Using /var/log/stevedore for logs makes perfect sense. [131073900140] |The packagers want your package to fit in with the rest. [131073900150] |The packagers also make sure your application links to their specific versions of the libraries, so you don't have to ship those as well. [131073900160] |[*] When you put your software in /opt, I'd expect the whole system to be there. [131073900170] |For example, [131073900180] |This is typically used for third party installers, or commercial games. [131073900190] |Those packages usually come in binary format only, and ship all their required libraries with them. [131073900200] |Even the config files are shipped in that folder. [131073910010] |PPTP OpenWRT 10.03 Howto [131073910020] |opkg update; opkg install pptpd kmod-mppe /etc/init.d/pptpd enable /etc/init.d/pptpd start [131073910030] |vim /etc/ppp/chap-secrets #USERNAME PROVIDER PASSWORD IPADDRESS SOMEONE pptpd SOMEPASSWORD 192.168.1.4 [131073910040] |When i want to connect with the networkmanager from a Fedora 14 box that's connected through wifi: it says: failed to connect. [131073910050] |Can someone please post a howto regarding this? [131073910060] |I just need a VPN server on the openwrt side, so that clients could connect to it, so the "channel" is secured(pptp). [131073910070] |Thank you! [131073920010] |What is ANALYZE/CATALOG for Linux msginit binary message catalog? [131073920020] |VMS has an ANALYZE command that examines an item such as an executable image or an object file displaying information about its contents. [131073920030] |Is there such a command for examining the output of msginit which is a binary message catalog file? [131073920040] |Something equivalent to ANALYZE/CATALOG? [131073930010] |info gettext might give you some clues. [131073930020] |I am not sure what do you really need, but msgunfmt looks promising. [131073940010] |What is Linux for VMS DCL "ON ERROR"? [131073940020] |Using VMS DCL command scripts one can catch errors by using the on error command: [131073940030] |How does one do this in Linuxland? [131073940040] |In the example an error with directory or delete will cause control to go to MyErrorHandler. [131073950010] |I think you want the trap function, specifically: [131073950020] |Errors later will jump to the function. [131073950030] |This is supported by at least bash, zsh, and ksh. [131073960010] |Set the language for a single program execution. [131073960020] |Complete C++ i18n gettext() “hello world” example. 
sets the LANG environment variable using export before executing the program (Linux): [131073960030] |Is there a way to set the language just while executing hellogt, like a command line argument? [131073960040] |This would be handy for testing programs. [131073970010] |You mean something like: [131073970020] |? [131073970030] |Or maybe you mean you want to parse the commandline (argv), find the language passed in, and pass it to setlocale? [131073980010] |In ksh, bash, and similar shells, [131073980020] |will set LANG=es_MX.utf8 only for the invocation of ./hellogt. [131073980030] |More portably, there is a program called env [131073980040] |which will set environment variables and run the program specified. [131073980050] |This works in all shells, including csh and traditional sh (which do not support the first method). [131073990010] |Locate postion then make a change using sed. [131073990020] |This script uses sed to change all "" to "new stuff". [131073990030] |How would one change just the "" after the yyy: using sed or anything else? [131074000010] |I find once things get beyond a certain level of complexity, I switch to perl. s2p will handle the translation of your current sed solution. [131074000020] |Or you could write it from scratch trivially. [131074000030] |The search/replace expression will remain the same. [131074010010] |Answer gleaned from O'Reilly - Sed &Awk 2nd Addition Around page 152 [131074010020] |Write a script in a file [131074010030] |Apply this to your data with the usual [131074010040] |sed -f script sample.txt [131074010050] |This script says look for yyy:. [131074010060] |When found read another line into the pattern buffer (sounds like Star Trek Transporter). [131074010070] |Now do a s/ command on the joined lines. [131074010080] |So we are looking for yyy: newline "". [131074010090] |If found replace with yyy: \ notice backslash actual newline then "new stuff" [131074010100] |Good luck [131074020010] |I might not understand your question. [131074020020] |If you want to replace ONLY the value after 'yyy' then use the previous answer. [131074020030] |If you want to replace ANY values after 'yyy', try this one-liner: [131074020040] |Haven't tested it :D... [131074030010] |Linux editor with VMS EDT like direction mode. [131074030020] |VMS editor EDT allows one to use the keypad to control most of ones editing commands. [131074030030] |One of the rather nice features is that the direction of operation can be set to "up" or "down". [131074030040] |This then effects commands like "move to next character" and "move to start of line". [131074030050] |Another feature is that there are "character", "word" and "line" buffers that one can cut, copy and paste to/from. [131074030060] |I am looking for a Linux editor that has these features? [131074030070] |This is not a request for an EDT editor for Linux. [131074030080] |I am "willing" to learn a new editor if it has these features. [131074040010] |Vim seems to provide all those features. [131074040020] |There are plenty of good tutorials for it on the web, but the easiest way to familiarise yourself with the editor is to install it and then run the vimtutor program supplied. [131074040030] |H - Left J - Down K - Up L - Right [131074040040] |4L - 4 characters right 4W - 4 words right [131074040050] |0 - Start of line $ - End of line gg- Start of file GG- End of file 100gg - Line 100 [131074050010] |You want emacs. [131074050020] |Emacs has an EDT emulation mode (M-x edt-emulation-mode). 
[131074050030] |This will set up emacs to use the EDT keymappings. [131074050040] |Before you can use it, run "emacs -q -l edt-mapper". [131074050050] |This will let you set up what keys on YOUR keyboard map to the various VT keys (GOLD, DO, etc.). [131074050060] |It works quite well, and you have the extra functionality of emacs, plus the EDT keys you're used to. [131074050070] |EDIT: I should look at dates, this was asked ages ago... but the information is still good. [131074060010] |Emacs's Picture mode is designed to facilitate drawing ASCII diagrams and tables. [131074060020] |You can change the direction in which the cursor moves after inserting a character with C-c left, C-c down, etc. [131074070010] |What is Linux for SET FILE/ERASE_ON_DELETE? [131074070020] |In VMS one may tell the file system to write junk over the existing contents of a file when it is deleted. [131074070030] |Here is the DCL command to identify the file for this kind of treatment: [131074070040] |This allows the policy to be set at one point in time, so later users of the file do not have to handle that detail of security. [131074070050] |A standard delete, which takes the file name out of the directory and frees the space for another file to use, will also modify the existing contents to prevent the next user from reading it. [131074070060] |The normal delete: [131074070070] |What is Linux for this? [131074080010] |I am not sure if this is what you are looking for: [131074080020] |It writes random bytes to FILE. [131074090010] |This is supported only by some Linux filesystems: [131074090020] |may (or may not) do what you want. [131074090030] |From "man chattr": [131074090040] |I do not know which specific mainline kernel versions (if any) implement this. [131074100010] |Note that with current technology you'll sometimes have no control over that. [131074100020] |With SSD disks each write can be done in a different location, keeping the old data... and this cannot be overridden by the OS, the filesystem or anything else in software. [131074100030] |More on http://www.anandtech.com/printarticle.aspx?i=3531. [131074110010] |The closest equivalent you'll typically find on unix systems is encryption. [131074110020] |An easy way to set up an encrypted directory on Linux (and most other unices) is Encfs. [131074110030] |Quickstart: [131074110040] |There are several other options for filesystem encryption. [131074110050] |See How to best encrypt and decrypt a directory via the command line or script?, Best way to do full disk encryption?, and several other threads on Super User, Server Fault and Ask Ubuntu. [131074110060] |I don't know what threats FILE/ERASE_ON_DELETE protects against. [131074110070] |Note that on unix, the former contents of a rewritten or deleted file can only ever be seen by the system administrator or someone with physical access to the drive: there's no “fast file creation mode” that would populate a file with whatever data just happened to be in the disk region used by the file. [131074120010] |What is Linux for $DECK and $EOD? [131074120020] |In VMS DCL one may embed data in a command file using $DECK and $EOD. [131074120030] |What is Linux for this? [131074130010] |You can embed data within shell scripts. [131074130020] |How this works is shell dependent. [131074130030] |However bash, Perl, etc. do this in similar ways, using a heredoc, e.g. in bash (and similar shells): [131074130040] |will write as input a, b, c up to the EOF, and then cat will write that out to sample.txt.
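A minimal sketch of the kind of heredoc being described (the label EOF and the file name sample.txt are simply the ones used in the explanation above):

    cat > sample.txt <<EOF
    a
    b
    c
    EOF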
[131074130050] |Note that EOF is a convention and you can use any label. [131074130060] |Try the above on the command line to see more clearly what's going on. [131074140010] |Loki also made a tool called makeself. [131074140020] |This tool can embed compressed data in a shell file and is useful for self-extracting installation scripts. [131074150010] |Lightweight custom Linux build [131074150020] |Hey, I am looking for a Linux build which takes up very little memory to boot up. [131074150030] |I don't need any of the UI modules. [131074150040] |I need help on choosing from the ones currently available or pointers on building my own. [131074150050] |I looked at some Linux distros like Arch Linux and Damn Small Linux, but I haven't decided yet. Also, it would be great if someone could help me with how to run a custom program immediately on boot. [131074150060] |Thanks for the help, in advance. [131074160010] |You can do a custom build using the site http://www.instalinux.com; after customizing you can download an .iso file. [131074170010] |It is generally possible to roll a system with Busybox; busybox's web site details how to do this. [131074170020] |A statically linked busybox binary will require just a couple of megs of memory (over what the kernel requires, of course). [131074170030] |I've been able to boot and log into a machine with 8M of RAM. [131074170040] |However, it is relatively complicated to get all the system services you may require working; using a small existing distribution might be better. [131074170050] |How much is "little memory"? [131074170060] |Are you on a really tiny embedded system? [131074170070] |Unless you have less than 64M, or your process needs to use a lot of the available RAM (and no swap), I'd recommend going with a minimal standard distro. [131074170080] |Edit: The "buildroot" tool is a companion of Busybox which helps you to build very small usable filesystems. [131074180010] |You could go with Arch Linux, but that doesn't strictly count as a "custom" distro, I think. [131074180020] |I'd go with Linux From Scratch. [131074180030] |That's not really a distro, but rather a system for building your own distro. [131074180040] |I think you'll find you have some "fat" in your system when you're done, as it has you building and installing Tcl/Tk (or at least it used to) and a few other things that aren't strictly necessary, but let you run test cases semi-automatically. [131074190010] |Try TinyCore (or MicroCore even). [131074190020] |TinyCore is at 10MB (ISO) and MicroCore at 6MB. [131074190030] |TinyCore has X and a minimal GUI, while MicroCore is text mode only. [131074190040] |I use it on a 12-year-old laptop with a 199MHz CPU and 32MB RAM. [131074190050] |Works perfectly, even with WLAN, etc. [131074190060] |TinyCore is made with customization abilities in mind. [131074190070] |You can easily fork your own minimal distro from TinyCore. [131074190080] |To facilitate this, there's even a remastering how-to in the Wiki. [131074200010] |Implications of Linux support for AMD Fusion APUs? [131074200020] |I am a newcomer to this, so please excuse my ignorance. [131074200030] |But that's what questions are for, right? :p [131074200040] |As I understand it, the Linux kernel has had good support for Intel and AMD CPUs (pretty obvious since your OS installs and runs fine!).
[131074200050] |But now that AMD is releasing their new Fusion APUs, is it just a marketing gimmick that can be treated as an ordinary CPU by the Linux kernel, or is this APU something new for which new kernel support needs to be added? [131074200060] |Since the Fusion APUs are slated to include the functions of the GPU, will Linux be able to take advantage of all its functions? [131074200070] |This might have implications for whether my next Linux machine can and/or should be based on AMD Fusion hardware or not. [131074200080] |Thanks for your answers. [131074210010] |As I see it, the APU is a combination of the CPU and GPU integrated into one thing, so support should be fairly easy. [131074210020] |I don't know about specific details, but AMD said that the APU is fully supported in Linux. [131074220010] |Kernel 2.6.38 and above will support the AMD Fusion Ontario and Zacate APUs. [131074230010] |Using top to see processes run by a user via sudo [131074230020] |If I run top -u username I will see all the processes run by a particular user. [131074230030] |Is there a way to also see all the processes that the user called via sudo? [131074240010] |It doesn't seem to be possible in an easy way. [131074240020] |From top's perspective, any command a user runs using sudo would appear to be running as root because it really is running as root. [131074240030] |One way you could try is to track it down to the terminal where the user is logged in, then see processes running as root on that terminal. [131074240040] |For example, [131074240050] |Note the user is on pts/0. [131074240060] |Now run top. [131074240070] |Now press f (field select), then g (toggle controlling tty field), then Enter. [131074240080] |Now watch for processes with pts/0 in the TTY column. [131074240090] |You can also sort by TTY by pressing g a second time. [131074240100] |Or you could use procfs to get a list of pids, e.g. [131074240110] |Then do anything with that list. [131074240120] |Even use it to run top -p ,.... [131074240130] |Of course, in that case, top won't show you if that user starts a new command using sudo. [131074240140] |Also don't forget that a user running a command is probably being logged, e.g. to /var/log/secure or /var/log/auth.log, or /var/log/sudo.log, or whatever your system uses. [131074250010] |You could install htop and see if it gives you a better overview. htop supports filtering by user as well. [131074260010] |Why does locale es_MX work but not es? [131074260020] |The Wikipedia entry for GNU gettext shows an example where the locale is just the language, "fr". [131074260030] |Whereas the 'i18n gettext() "hello world" example' on SO has the locale value with both the language and country, "es_MX". [131074260040] |I have modified the "es_MX" example to use just the language, "es". [131074260050] |This covers making an "es" rather than an "es_MX" message catalog and invoking the program with the environment variable LANG set to "es". But this produces the English text rather than the expected Spanish. [131074260060] |According to Controlling your locale with environment variables: [131074260070] |environment variable, LANGUAGE, which is used only by GNU gettext ... [131074260080] |If defined, LANGUAGE takes precedence over LC_ALL, LC_MESSAGES, and LANG. [131074260090] |produces the expected Spanish text rather than English. [131074260100] |But this does not explain why "LANG=es" does not work.
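For illustration, the invocations being compared look roughly like this (hellogt is the binary from the linked example; the exact locale name es_MX.utf8 is an assumption taken from that example):

    LANG=es ./hellogt                        # prints English: plain "es" is not a generated locale
    LANG=es_MX.utf8 ./hellogt                # prints Spanish, as in the original example
    LANGUAGE=es LANG=es_MX.utf8 ./hellogt    # GNU gettext accepts the bare "es" in LANGUAGE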
[131074270010] |I'm not really sure about this, but I've been working with Joomla and other CMSes, and the code for Spanish (Spain) is es_ES. [131074280010] |Might this be because Spanish is spoken in many different countries and may have variations and quirks between dialects? [131074280020] |Same as en_US, en_CA, or en_GB, etc. [131074280030] |In fact, here are your options - I think you can guess most of the countries (AR=Argentina, BO=Bolivia, CL=Chile, etc.) [131074290010] |The locale you use must be generated on the system. [131074290020] |Use "locale -a" to see all generated locales. [131074290030] |Locale source files must be present under /usr/share/i18n/locales/, and as far as I can see, all are of the form 'language_COUNTRY'. [131074290040] |If you really must use an 'es' locale, you can prepare the necessary files: modify /etc/locale.gen to include 'es' and run locale-gen to generate it. [131074290050] |Otherwise, use an 'es' locale with a country. [131074300010] |Wikipedia is probably not the best reference for stuff like this. [131074300020] |It usually has very simple examples that may not be widely applicable, constructed for understanding concepts more than for practical considerations. [131074300030] |Why not use GNU's own documentation? [131074300040] |http://www.gnu.org/software/gettext/manual/gettext.html#Setting-the-POSIX-Locale [131074300050] |You can set LANGUAGE to "es" (or even "es:fr:en" for a priority list), but LANG would still need to be set to es_MX or something like that. [131074300060] |The docs explain it fairly clearly. [131074310010] |From Zac Thompson's link to the GNU gettext utilities, section 2.3 "Setting the Locale through Environment Variables", sub-section "The LANGUAGE variable": [131074310020] |In the LANGUAGE environment variable, but not in the other environment variables, ‘ll_CC’ combinations can be abbreviated as ‘ll’ to denote the language's main dialect. [131074310030] |For example, ‘de’ is equivalent to ‘de_DE’ (German as spoken in Germany), and ‘pt’ to ‘pt_PT’ (Portuguese as spoken in Portugal) in this context. [131074310040] |This makes the point that "es" is an abbreviation that LANGUAGE, but not LANG, supports. [131074320010] |Default username for a Samba share that is not the user name on the client system. [131074320020] |When mounting a Samba share, the user name defaults to the user name from the client machine rather than the "User Name" field from the earlier "Connect to Server" dialogue. [131074320030] |Accessing a Samba share over ssh from Linux with Nautilus where the client user name is "lfm" and the user name on the server system is "lastfirstmiddle": [131074320040] |The user home share does not exhibit the problem. [131074320050] |The password dialog is using the user name as specified in the "Connect to Server" dialogue, as expected: [131074320060] |Using the "Connect to Server" dialogue one can get a list of "Windows Shares" by leaving the "Share" field blank. [131074320070] |Then selecting a share and choosing "Open with Open Folder" produces the "Connect to Server" dialog, which, unlike the previous case, defaults the "User name" to the user on the client system. [131074320080] |It does not pick up the value used in the "Connect to Server" dialogue that produced the list of shares. [131074320090] |One can use "Connect to Server" and specify a "Bookmark" which can be used later to mount a share without having to complete the "Connect to Server" dialog each time.
[131074320100] |To have access to all six shares listed above (ABCXYZ) one would need to create six bookmarks. [131074320110] |This might be OK for six shares, but if there are dozens of shares this would be a bit obnoxious. [131074320120] |Is there a way to change the default user name to something other than the client system's user name? [131074330010] |You might be using share-level security (security = share) in your smb.conf file. [131074330020] |In share-level security, Samba uses the share name as the username for the connection and does not ask for a username in the protocol. [131074330030] |This is basically how Windows 98 worked. [131074330040] |You probably want security = user, and you will need a proper smbpasswd file as well, since Samba can't use the normal UNIX password database (/etc/passwd or /etc/shadow). [131074330050] |Use smbpasswd -a lfm to add a new user for lfm and set its password, set security = user in smb.conf, and restart Samba. [131074340010] |In a new user's home directory, create a sub-directory with a specific group and permissions. [131074340020] |Linux will copy the contents of /etc/skel when a new user is created. [131074340030] |I want to have a sub-directory in each user's home directory, MyStevedore. [131074340040] |I want this directory to have the owner be the new user and the group to be the group stevedore, with the permissions drwxrwxr-x. [131074340050] |The user is not a member of the group stevedore. [131074350010] |You could add an if statement to the .bash_profile script in /etc/skel that will check if the folder exists. [131074350020] |If it doesn't exist it will create it and set the permissions. [131074350030] |The first time a new user logs in the folder will be created. [131074360010] |Assuming you're using adduser to create the user, it will do most of the job, assuming you've created a directory /etc/skel/MyStevedore with your desired permissions. [131074360020] |However on most systems ~/MyStevedore will always belong to the user's primary group. [131074360030] |On Debian and derivatives (including Ubuntu), once adduser has created the user, it calls /usr/local/sbin/adduser.local if it exists. [131074360040] |You can use it to complete the job. [131074370010] |Load a program/module on boot [131074370020] |I'm trying to load a program I wrote at boot time; the program consists of a kernel module (module.ko) and a small bash script. For the module I tried doing depmod mymodule.ko and modprobe -a, and messing around with modprobe libraries and .conf files, without any success. [131074370030] |So, I wrapped it all (the module and my executable program) in a bash script. [131074370040] |I tried to load it on boot with rc.d. [131074370050] |I failed at this one too, because I think rc.d only runs executable files and not bash scripts. [131074370060] |If I'm right, how do I make my bash file executable? [131074370070] |And install it with the rc.d tool? [131074370080] |Is my strategy right? [131074370090] |Thank you all in advance :) [131074370100] |*working on Linux CentOS [131074380010] |For your module, you'd normally put that in /etc/modprobe.conf, but you can also put it in /etc/rc.modules. [131074380020] |For your script, if you want to just execute it once when the server boots, it can be put in /etc/rc.d/rc.local (although it is also executed when changing run levels). [131074380030] |If you're looking for a more complex service you can start and stop or run at various run levels, you want a System V Init script [131074390010] |What firmware works with a D-Link DIR-600?
[131074390020] |Hi, I'm planning on getting a D-Link DIR-600 to be used as a WLAN access point, and for WEP/WPA certificate management. [131074390030] |I know it works with DD-WRT and OpenWRT, but not Tomato. [131074390040] |Now, I've been looking at what firmware I can put on that device, prior to getting it. [131074390050] |Tomato is my favorite option, but since it doesn't work with the router, I'm having second thoughts about getting it at all. [131074390060] |If you happen to have a better suggestion for a router, please share it, but I'd like to stay in the same price range as the DIR-600 (around 25€). [131074400010] |YouTube videos become choppy when maximised [Ubuntu Maverick] [131074400020] |Hi there, I just installed 32-bit Ubuntu Maverick (10.10) stable and everything is working fine. The system specs are as below: [131074400030] |3 GHz Intel DG101, 512 MB RAM, 80 GB HDD, 256 MB ATI Radeon Xpress [131074400040] |The only problem is that when YouTube videos are maximized they consume a lot of CPU, plus the video becomes slower and choppy. [131074400050] |What to do? [131074400060] |I have Firefox and installed Flash Player 10 as well. [131074400070] |But no luck... Any ideas how to fix it? [131074400080] |The videos work absolutely fine in XP on the same computer. [131074400090] |I have tried Google Chrome too, but no luck there either. [131074400100] |Any answer will be appreciated. [131074410010] |To find the proprietary driver for your ATI card, head to the ATI download page and check if a Linux driver is available for your model. [131074410020] |Alternatively, you can use Ubuntu's driver finder by going to System -> Administration -> Hardware Drivers. [131074410030] |If installing the driver doesn't help, try the newest version of Flash by getting it from Adobe. [131074420010] |He he... [131074420020] |You guys are funny. [131074420030] |It's very simple. [131074420040] |It will work in 320p when the screen is maximised. [131074420050] |Change it from 480p to 320p, which is near the volume control, when the screen is maximised. [131074420060] |All the best watching YouTube!! [131074420070] |Regards, Aravindh :) [131074430010] |Unix file naming convention [131074430020] |Hi. [131074430030] |I was wondering: what is the naming convention for files in Unix? [131074430040] |I am not sure about this, but I think there is perhaps a universal naming convention that one should follow? [131074430050] |For example, say I want to name a file using backup, part 2, and random. [131074430060] |Should I do it like this: [131074430070] |backup_part2_random [131074430080] |OR [131074430090] |backup-part2-random [131074430100] |OR [131074430110] |backup.part2.random [131074430120] |I hope the question is clear. [131074430130] |Basically, I want to choose a format that conforms to the Unix philosophy. [131074440010] |Characters you should not use in filenames: [131074440020] || ; , ! @ # $ ( ) < > / \ " ' ` ~ { } [ ] = + & ^ [131074440030] |Character delimiters you should use to make names easier to read: [131074440040] |_ - . : [131074440050] |(In some cases ":" has special meaning though) [131074450010] |In Unix a filename is just a string, unlike DOS, where a filename was composed of a name and an extension. [131074450020] |So any of the given filenames is completely acceptable. [131074450030] |But many programs still use file suffixes beginning with a dot to distinguish different file types, e.g. the Apache Web Server uses suffixes to set the correct MIME type in response headers.
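As a quick illustration of that point, all of the candidate names are equally acceptable to the filesystem itself; only convention and the tools you use attach meaning to them:

    touch backup_part2_random backup-part2-random backup.part2.random   # all are just strings to the filesystem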
[131074460010] |Far more important than any particular convention is being consistent. [131074460020] |Pick a style, and stick with it. [131074470010] |Stick to alphanumeric filenames. [131074470020] |Avoid spaces, or replace spaces with underscores ( _ ). [131074470030] |Limit punctuation in file names to periods (.), underscores ( _ ), and hyphens (-). [131074470040] |Generally filenames are lowercase, but I use CamelCase when I have multiple words in the filename. [131074470050] |Use extensions which indicate the type of file. [131074470060] |Programs do not need extensions, as the execute bit is used to indicate programs, and the shells know how to run programs of various types. [131074470070] |It is common but not required to use .sh for shell scripts, and .pl for Perl scripts. [131074470080] |The Windows executable extensions .bat, .com, .scr, and .exe indicate Windows executables on Unix. [131074470090] |Pick a standard and stick to it. [131074470100] |But it won't break things if you avoid it. [131074470110] |Hidden (or dot) files have names starting with a period. [131074470120] |These normally don't show up in directory listings. [131074470130] |Use 'ls -a' to include the dot files in the list. [131074480010] |To add to what others have said, I'd just say that while accented letters and many special characters are legal in filenames, they can cause issues in any of the following scenarios: [131074480020] |
  • You share your filesystem with other computers, particularly with different operating systems;
  • [131074480030] |
  • You share files with others (and although email tends to be quite good with conversions, sometimes it just does not work);
  • [131074480040] |
  • You use shell scripts to automate some tasks (spaces are particularly problematic, though there are many ways to deal with them);
  • [131074480050] |
  • You use a file share from another computer.
  • [131074480060] |... [131074490010] |To add to what everyone else has said: [131074490020] |1 - Even though Linux doesn't care much about extensions, Windows does, so make sure any file you ever plan on giving anyone has the appropriate extension. [131074490030] |2 - CamelCase seems to be the easiest to use in scripts, with no special characters to worry about escaping. [131074500010] |. is used to separate a filetype extension, e.g. foo.txt. [131074500020] |- or _ is used to separate logical words, e.g. my-big-file.txt or sometimes my_big_file.txt. - is better because you don't have to press the Shift key; others prefer _ because it looks more like a space. [131074500030] |So if I understand your example, backup-part2-random or backup_part2_random would be closest to the normal Unix convention. [131074500040] |CamelCase is normally not used on Linux/Unix systems. [131074500050] |Have a look at file names in /bin and /usr/bin. [131074500060] |CamelCase is the exception rather than the rule on Unix and Linux systems. [131074500070] |(NetworkManager is the only example I can think of that uses CamelCase, and it was written by a Mac developer. [131074500080] |Many have complained about this choice of name. [131074500090] |On Ubuntu, they have actually renamed the script to network-manager.) [131074500100] |For example, in /usr/bin on my system: [131074500110] |and even then, none of the files starting with a capital uses CamelCase: [131074510010] |My take on Unix/Linux filename conventions: [131074510020] |
  • Unix/Linux filesystems don't inherently support the notion of an extension. [131074510030] |The concept of a file extension exists entirely as something supported by utilities such as cp, ls, or the shell you are using. [131074510040] |I believe it is this way on NTFS as well, but I could be wrong.
  • [131074510050] |
  • Executables, including shell scripts, usually have no extension at all. [131074510060] |Scripts will have a hashbang line (e.g. #!/bin/bash) that identifies what program should interpret them.
  • [131074510070] |
  • Any executable that is two letters long is super important. [131074510080] |So don't name your executables two-letter filenames. [131074510090] |Any file in /etc ending in tab is also super important, such as fstab, mtab, inittab.
  • [131074510100] |
  • Sometimes .d is appended to directory names, particularly in /etc, but this isn't widespread (UPDATE: http://serverfault.com/questions/240181/what-does-the-suffix-d-mean-in-linux)
  • [131074510110] |
  • rc is widely used for configuration scripts or files, either prepending (e.g., rc.local) or suffixing (.vimrc)
  • [131074510120] |
  • The Unix/Linux community has never had a three-character limit on extensions and frowns upon shortening well-known extensions to fit. [131074510130] |For example, don't use .htm at the end of HTML files on Unix/Linux, use .html.
  • [131074510140] |
  • In a set of files, a filename is sometimes capitalized, or in all caps, so it appears at the head of a directory listing. [131074510150] |The classic example is Makefile in source packages. [131074510160] |Only do this for stuff like README.
  • [131074510170] |
  • ~ is used to identify a backup file or a directory, as in important_stuff~, or /etc~. [131074510180] |Many shells will expand a lone ~ to $HOME.
  • [131074510190] |
  • Library files almost always begin with lib. [131074510200] |Exceptions are zlib and probably a few others.
  • [131074510210] |
  • Scripts that are called by inetd sometimes are tagged with a leading in., such as in.tftpd.
  • [131074510220] |
  • The ending z in vmlinuz means zipped, but I've never seen any other file named this way.
  • [131074520010] |How long should it take to generate 300 bytes of entropy on a VPS? [131074520020] |I'm running NetBSD on a Xen VPS, and I'm trying to generate a GPG keypair. [131074520030] |I've gotten most of the way there, but now I'm getting the following error message: [131074520040] |This has been the status for about 10 hours now, and I've been doing things like installing packages from source in another session. [131074520050] |Is the process hung? [131074520060] |Is this a known issue of some sort? [131074520070] |Can it really take that much effort to generate 300 bytes of entropy? [131074520080] |Thanks. [131074520090] |UPDATE: The source of this issue is that NetBSD domUs don't have an entropy source enabled by default. [131074520100] |You should manually enable the network interfaces as a source of entropy using the rndctl utility. [131074530010] |It should not. [131074530020] |I would try restarting the process. [131074530030] |Beyond that, there is not much you can do, other than digging into the source code to see what is wrong. [131074530040] |As you probably already know, if your server is completely idle, it is possible that generating random data takes a long time. [131074530050] |However, installing packages and poking around should be enough to fix that. [131074540010] |Is there an easier way to manipulate GRUB 2 entries? [131074540020] |Using GRUB 2 is harder than GRUB 1 for these use cases: [131074540030] |
  • It looks like if I want to reorder how GRUB 2 menu entries appear in the selection window, I have to rename the files in the "/etc/grub.d/" directory.
  • [131074540040] |
  • If I have to change the boot order, I first have to look in "/boot/grub/grub.cfg", check where the entry I want to be the default appears, then set the GRUB_DEFAULT parameter in "/etc/default/grub" to match it (counting from 0).
  • [131074540050] |The old GRUB used to allow me to do all of this by just moving text entries in "/boot/grub/menu.lst" around. [131074540060] |This much-simpler way kept me using GRUB 1 for a while. [131074540070] |This makes me wonder if there's a specialized tool to make all of this easier. [131074550010] |GRUB 2 is horrid. [131074550020] |I threw it out and use extlinux (part of the syslinux package) from now on. [131074550030] |To answer your question, there is no easier way. [131074560010] |Daniel Robbins, creator of Gentoo Linux, has been working on something called "Boot-Update" for Funtoo. [131074560020] |I've not tried it. [131074560030] |Seems to be what you are looking for. [131074560040] |http://docs.funtoo.org/wiki/Funtoo_Boot-Update [131074570010] |How to know the types of windowing system, window manager and desktop environment of a Unix-like OS [131074570020] |I was wondering what commands/utilities can be used in a terminal to find out the windowing system (such as the X window system), window manager (such as Metacity, KWin, Window Maker) and desktop environment (such as KDE, Gnome) of a Linux or other Unix-like operating system. [131074570030] |Thanks! [131074580010] |With difficulty. [131074580020] |There is no centralized system for keeping track of these things. [131074580030] |
  • On Debian-derived Linuxes you might try the alternatives system.
  • [131074580040] |
  • You could query the package manager, and if you find only one Foo installed, you can be pretty sure which Foo is in use.
  • [131074580050] |
  • You could try parsing the output of ps. [131074580060] |Or, equivalently, reading /proc on systems that have it.
  • [131074580070] |Possibly the most reliable thing is to ask the user. [131074590010] |How do I migrate configuration between computers with different hardware? [131074590020] |I want to migrate the configuration of an Ubuntu desktop to a new box with different hardware. [131074590030] |What is the easiest way to do this? /etc/ contains machine- and hardware-specific settings, so I can't just copy it blindly. [131074590040] |A similar problem exists for installed packages. [131074590050] |Edit: This is a move from x86 to x86-64. [131074600010] |Really, a lot of the Windows voodoo regarding drivers, the registry, and being sensitive to motherboard changes is less severe on Linux if you are using a generic kernel with all drivers as modules, which is the usual situation for Ubuntu. [131074600020] |These are the only things in /etc that are dependent on the hardware that I know of: [131074600030] |
  • If you have proprietary graphics drivers installed, I would think these can be a problem.
  • [131074600040] |
  • I've swapped a hard drive with Debian installed from an old HP Pavilion (500MHz CPU, quite old) to a slightly newer MSI KT4V board. [131074600050] |The only issue I had was that my network interface names were messed up. [131074600060] |But this affected me more than the usual user because this install was explicitly for use as a router.
  • [131074600070] |
  • Another thing that might be affected is lm-sensors, if you use it. [131074600080] |This is motherboard specific, but you can just run sensors-detect to fix that.
  • [131074600090] |
  • If you change the device Linux expects its root partition to be on, or if any of the devices/partitions pointed to in /etc/fstab change, e.g. you are moving from a PATA drive to a SATA one, then you must update this, otherwise Linux will have problems.
  • [131074600100] |If the GPU is the same, the drive controller is the same type, and you don't have a bunch of homemade scripts dependent on the names of your network interfaces, I don't foresee major issues. [131074610010] |[adding onto this excellent answer] [131074610020] |I see that you mention concern for installed packages. [131074610030] |By this, I suppose you mean that you are going to be transferring a disk from one machine to another. [131074610040] |Assuming that your two machines are x86 architecture, the only problem I can think of that can happen is if your installation is 64-bit and your new machine isn't. [131074610050] |If things are the other way around, there shouldn't be a problem. [131074620010] |Here's how to get everything except what you've manually configured: [131074620020] |Edit these files as necessary for anything that's arch-dependent (e.g., linux-image), but I don't think there will be much. [131074620030] |Copy these files to the new system, then run: [131074620040] |You'll also want to copy (preferably with rsync) /home and any other data directories to the new system. [131074620050] |The only thing left will be config files from major packages (e.g., apache, bind, cron jobs, et al.). [131074630010] |First, if you're going to keep running 32-bit binaries, you're not actually changing the processor architecture: you'll still be running an x86 processor, even if it's also capable of doing other things. [131074630020] |In that case, I recommend cloning your installation or simply moving the hard disk, as described in Moving linux install to a new computer. [131074630030] |On the other hand, if you want to have a 64-bit system (in Ubuntu terms: an amd64 architecture), you need to reinstall, because you can't install amd64 packages on an i386 system or vice versa. [131074630040] |(This will change when Multiarch comes along.) [131074630050] |Many customizations live in your home directory, and you can copy that to the new machine. [131074630060] |The system settings can't be copied so easily because of the change in processor architecture. [131074630070] |On Ubuntu 10.10 and up, try OneConf. [131074630080] |OneConf is a mechanism for recording software information in Ubuntu One, and synchronizing with other computers as needed. [131074630090] |In Maverick, the list of installed software is stored. [131074630100] |This may eventually expand to include some application settings and application state. [131074630110] |Other tools like Stipple can provide more advanced settings/control. [131074630120] |If you do things manually, the main thing you'll want to reproduce is the set of installed packages. [131074630130] |See Ubuntu list explicitly installed packages and the Super User and Ask Ubuntu questions cited there, especially Telemachus's answer. [131074630140] |In a nutshell: [131074630150] |For things you've changed in /etc, you'll need to review them. [131074630160] |Many have to do with the specific hardware or network settings and should not be copied. [131074630170] |Others have to do with personal preferences, but you should set personal preferences on a per-user basis whenever possible, so that the settings are saved in your home directory. [131074630180] |If you plan in advance, you can use etckeeper to put /etc under version control (etckeeper quickstart). [131074630190] |You don't need to know anything about version control to use etckeeper; you only need to start learning if you want to take advantage of it to do fancy things.
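For the package list mentioned above, a minimal sketch of the commonly used dpkg approach (the file name is illustrative, and this is not necessarily the exact command set that was elided from the answers):

    # on the old machine
    dpkg --get-selections > selections.txt
    # copy selections.txt to the new machine, then:
    sudo dpkg --set-selections < selections.txt
    sudo apt-get dselect-upgrade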
[131074640010] |Google Chrome Cache [131074640020] |Google Chrome used to store YouTube videos in /tmp, but not since the last two versions. [131074640030] |Nor could I find those files in ~/.cache/google-chrome. [131074640040] |Googling for this query produces Windows-specific results. [131074640050] |Does anyone have any idea about where these files are stored? [131074640060] |I am using Fedora 14. [131074650010] |You can adapt the script found here; it works for me using Google Chrome on Debian. [131074660010] |Use this bash script to get a list of all temporarily saved flash videos: [131074660020] |Mark the script as executable and then run, for example to view the videos, the following: [131074660030] |Sorry for my bad English, but I'll try to explain: since Flash 10.1, all /tmp file system entries get deleted as soon as the flash player opens them. [131074660040] |But the file itself still exists, since the kernel only deletes the file once no hardlinks to it exist anymore. [131074660050] |Only the flash plugin knows where the file/video is. Luckily, the kernel can tell us which process has which file handle open. [131074660060] |So, there are still hardlinks for these files located at /proc/$PID/fd. [131074670010] |They are stored in ~/.cache/chromium/Default/Cache [131074680010] |Hung system call [131074680020] |So I'm working with a custom kernel module that I'm writing a Python front end for. [131074680030] |The kernel module works, and it adds a framebuffer device file to /dev/fb1. [131074680040] |I can read and write to it fine. [131074680050] |I've been using Python's mmap module to map the device buffer and that seems to work great. [131074680060] |Now I'm trying to implement numpy, and I'm using numpy's memmap function which, I assume, should work similarly. [131074680070] |The problem is that opening the device file using numpy's memmap function hangs the kernel (I think). [131074680080] |This is what I'm doing to initially open the file: [131074680090] |The process hangs, and I can't kill python except through killall python, which presumably leaves the file resource open. [131074680100] |Any subsequent attempts to open the file again hang indefinitely, simply doing [131074680110] |and I get this in dmesg: [131074680120] |I guess my question is, can I manually kill the system call? [131074680130] |Or somehow get the mutex unlocked? [131074680140] |Or am I totally missing what the error is telling me? [131074680150] |The weird thing is that even just the memmap call corrupts the framebuffer and writes garbage to my display. [131074680160] |I'm guessing this is just numpy not working well with device files. [131074680170] |Update: [131074680180] |This is the output from ps -l. [131074680190] |The first python is the one that originally ran the numpy memmap call (at least I'm fairly sure). [131074680200] |The second ipython is from after the first process hung, running a simple plain Python open call. [131074690010] |Convert links of a downloaded website [131074690020] |I downloaded a website using: [131074690030] |for offline viewing and I just remembered that I forgot the --convert-links option! [131074690040] |They are all on my hard drive right now. [131074690050] |Is there a way to do --convert-links without redownloading the whole website? [131074700010] |Also, don't forget to use the option --timestamping. [131074700020] |Do either that, or add timestamping = on to "~/.wgetrc".
[131074700030] |It ensures that when you re-mirror the website, you don't re-download the whole website, but only changed/new files. [131074700040] |See the section Time-Stamping in wget's manpage for more. [131074700050] |FWIW I use this to mirror my blog: wget --mirror --adjust-extension --convert-links --no-cookies --timestamping http://my-blog.net -o log-blog [131074710010] |A straightforward one: serve the local directory with something like SimpleHTTPServer, then re-wget from localhost with appropriate options.
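A rough sketch of that local-server approach (the port, directory names and wget options here are assumptions, not taken from the answer):

    # serve the already-downloaded tree locally (Python 2 era syntax)
    cd downloaded-site && python -m SimpleHTTPServer 8000 &
    # re-run wget against localhost with link conversion enabled
    wget --mirror --convert-links --no-host-directories -P converted-site http://localhost:8000/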