[131090690010] |Why does "xdg-open" fail although "xdg-mime query default" succeeds on Ubuntu 10.10? [131090690020] |On Ubuntu 10.10, xdg-open fails to open the file and gives me the error: [131090690030] |No application is registered as handling this file [131090690040] |But xdg-mime query default ... succeeds for the mime type. [131090690050] |Why? [131090690060] |Here is my process: [131090690070] |
  • Added a new mime type application/vnd.xx by xdg-mime install mytype.xml. [131090690080] |Then xdg-mime query filetype shows that the new mime type is recognized.
  • [131090690090] |
  • I wrote my desktop entry file "my-app.desktop" like this: [131090690100] |
  • I copied this desktop file to ~/Desktop. [131090690110] |After I logged back in, I saw the shortcut on the desktop, and xdg-mime query default application/vnd.xx printed out this desktop file.
  • [131090690120] |
  • But, xdg-open fails with the error: [131090690130] |No application is registered as handling this file
  • [131090690140] |I've installed nautilus. [131090690150] |Am I missing something? [131090690160] |How do I fix this? [131090700010] |I don't know off the top of my head if this is the cause of your problem, but in general application *.desktop files need to be in specific places to be fully recognized. [131090700020] |Try moving your my-app.desktop to ~/.local/share/applications/my-app.desktop (create that directory first if needed: mkdir -p ~/.local/share/applications). [131090700030] |If you used a full pathname to the *.desktop file, change it to just the basename; I don't think pathnames work as expected there. [131090710010] |Why does “xdg-open” fail although “xdg-mime query default” succeeds on Ubuntu 10.10? [131090710020] |Possible Duplicate: Why does “xdg-open” fail although “xdg-mime query default” succeeds on Ubuntu 10.10? [131090710030] |(Sorry to post the question again, for text-formatting reasons) [131090710040] |On Ubuntu 10.10, "xdg-open" fails to open the file and gives me the error "No application is registered as handling this file". [131090710050] |But "xdg-mime query default ..." succeeds for the mime type. [131090710060] |Why? [131090710070] |Here is my process: 1. Added a new mime type "application/vnd.xx" with "xdg-mime install mytype.xml". Then "xdg-mime query filetype " shows that the new mime type is recognized. [131090710080] |2. I wrote my desktop entry file "my-app.desktop" like this:
[Desktop Entry]
Name=xxx
Comment=xxx
Icon=
Exec=/usr/bin/my-app %U
Terminal=false
Type=Application
Categories=Utility;
MimeType=application/vnd.xx;
[131090710090] |3. I copied this desktop file to ~/Desktop. After I logged back in, I saw the shortcut on the desktop and "xdg-mime query default application/vnd.xx" printed out this desktop file. [131090710100] |4. But xdg-open fails with the error "No application is registered as handling this file"! [131090710110] |I've installed nautilus. [131090710120] |Am I missing something?
[131090710130] |Could anybody share some ideas? [131090710140] |Many thanks! [131090710150] |Amanda [131090720010] |Establish openvpn tunnel in bash script [131090720020] |I'm trying to write a script which will establish an openvpn tunnel when the computer boots up. [131090720030] |The main problem lies in inputting the pkcs12 password. [131090720040] |I realise it's very bad practice to have a password stored in plain text, but I'm not too fussed about that -- the computer is very secure in all other respects so I'm pretty confident that nobody but me will be accessing it to view the password. [131090720050] |I have added the --management and the --management-query-passwords options so that the password can be input via a telnet session. [131090720060] |This works fine when I do it manually, but when I try to do it automatically with a bash script it fails. [131090720070] |My guess would be that either I am not doing the carriage return after the password line properly, or that some other garbage values are sneaking into the telnet session as inputs. [131090720080] |Here are the relevant bits of code (xxx for stuff that is classified): [131090720090] |Obviously this is not working -- the openvpn telnet management interface is still waiting for the password to be entered. [131090730010] |I would expect all it is looking for is the password for the private key. [131090730020] |Try using echo -e "xxxxxxxxx\r\n" or echo "xxxxxxxx". [131090730030] |You may want to try using expect to respond to the password request. [131090730040] |Some password programs look for the password on a tty-type device. [131090730050] |The program expect handles this. [131090730060] |You may be better off looking for an rc.d init script to start your tunnel. [131090730070] |This is the normal method for starting things at startup. [131090740010] |OK, I have managed to get the openvpn tunnel password entered automatically, and also managed to get the tunnel to run on boot-up.
Hopefully this helps someone else who is trying to do the same thing, because it's taken me over 20 hours to figure out something which now looks pretty basic. Code: [131090740020] |You may also want to redirect all output to a file so that if it fails you will be able to see why. I called the file ZZcreate_ovpn_tun.sh to make sure it was run last out of all of the scripts in the init.d dir. Ideally I would just have made sure that it only ran at level 6 or so, but this works fine for now. [131090750010] |Where to apply xmodmap for systemwide usage? [131090750020] |I need to apply a custom xmodmap for all users at the start. [131090750030] |Where do I need to put it? [131090750040] |I have thought of /etc/rc.local? [131090750050] |Does this make sense? [131090760010] |I don't have experience with xmodmap, but you can always make a .xmodmaprc file and put it in /etc/skel. [131090760020] |The file will be copied to all new users' home directories, thus applying the settings. [131090770010] |/etc/rc.local won't work for this situation, because xmodmap requires an X server to talk to. [131090770020] |I know that /etc/X11/Xmodmap is part of the xorg-x11-xinit package on RHEL and Fedora, so make your changes there. [131090770030] |They will be used when any new X session starts. [131090780010] |Is there a glibc API that can find the default handling application for a MIME type on Linux? [131090780020] |I want to find the default handling application in my C program. [131090780030] |Is there a C API with the same functionality as xdg-mime query default mime-type on Linux? [131090790010] |I don't believe there's a C API for querying mime-types in the same way that xdg-mime works. xdg-mime is just a shell script that queries your desktop environment (Gnome, KDE, or other), and runs the appropriate command to get the MIME type from that DE's internal configuration. [131090790020] |You could replicate the behaviour of the shell script, or just call the shell script directly from C.
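For reference, the shell-level interface the answer above refers to (which a C program could invoke via popen()) is just the xdg-mime call itself. The sketch below only builds and prints the command rather than running it, since xdg-mime and a registered handler may not be present on every system; query_default_cmd is a hypothetical helper name:

```shell
# Build (but do not execute) the xdg-mime query that a C program could
# hand to popen(). The mime type is the one from the question above.
query_default_cmd() {
  printf 'xdg-mime query default %s\n' "$1"
}

query_default_cmd application/vnd.xx
```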
The XDG Utils web page doesn't seem to show anything about a C API. [131090800010] |glibc doesn't know anything about MIME types; the API functions live at the level of desktop environment APIs, and the freedesktop.org folks recognize that harmonizing them is an impossible task, so they only specify the shell-level interface. [131090800020] |You either use that via popen() or code for a particular desktop environment. [131090810010] |Benefiting from sched_autogroup_enabled on the desktop [131090810020] |I am running a 2.6.37 kernel with sched_autogroup_enabled set to 1. I am not certain that I am seeing the benefits of this patch since: [131090810030] |
  • I am launching my applications from the desktop;
  • [131090810040] |
  • applications launched from the desktop share the same tty;
  • [131090810050] |
  • applications with the same tty do not benefit from the mentioned kernel feature.
  • [131090810060] |How can I select some applications to run on a different tty from the rest? [131090820010] |What tools allow me to present man pages as formatted HTML on a web server? [131090820020] |What tools exist to make the man pages on a system available via HTTP and link topics together, such that references in the SEE ALSO sections and elsewhere become hyperlinks in the HTML representation of the man pages? [131090820030] |I'm using Debian (6) and Ubuntu (10.04, 10.10) currently, so existing packages would be preferred, but I'll also go for any other solutions if they are clearly superior. [131090830010] |The Debian package dwww gives access to all the documentation installed by packages, including the manual pages. [131090830020] |After installing the package with your favorite package manager, you will be able to browse the local documentation with your browser at http://localhost/dwww/. [131090830030] |By default, access to this URL is restricted to local connections, but you can change this restriction in the configuration file /etc/dwww/apache.conf (don't forget to reload apache after changing something in this file). [131090840010] |How to determine if two devices are connected in a network? [131090840020] |Is there free Linux software to determine if two network devices are connected? [131090840030] |Or a PHP class would also be good... [131090840040] |I'd like to draw the network topology. [131090840050] |I can determine the devices in the network with zenmap (it saves the results to a file), but I don't know which device is connected to which device... (zenmap can also draw the topology, but it doesn't save it to a file) [131090850010] |traceroute may be a good start: it shows you the number of hops between your own host and a remote. [131090850020] |I don't know of a way to get this kind of info for a pair of remotes, except by running traceroute on one of them.
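To illustrate the traceroute suggestion above: the hop count is just the last hop number in its output, which can be extracted like this (the sample output below is canned so the snippet runs without network access; with a live target you would pipe traceroute -n host into count_hops instead):

```shell
# count_hops reads traceroute-style output and prints the number of hops
# (the hop number of the last line whose first field is numeric).
count_hops() {
  awk 'NF && $1 ~ /^[0-9]+$/ { n = $1 } END { print n }'
}

sample='traceroute to 192.0.2.1 (192.0.2.1), 30 hops max
 1  192.168.17.1  0.512 ms
 2  10.0.0.1  1.204 ms
 3  192.0.2.1  4.871 ms'

printf '%s\n' "$sample" | count_hops   # prints 3
```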
[131090860010] |If your device list includes IP addresses and netmasks, you could create a basic layer-3 graph by creating a vertex for each subnet, a vertex for each device, and an edge between each (subnet, device) in your device list. [131090860020] |This will result in a pure layer-3 topology which probably isn't a bad start. [131090860030] |Also, if your network is somewhat complex, this won't work too well. [131090860040] |For example, if you have duplicate or overlapping subnets (perhaps with NAT or MPLS VPNs), the assumption that all devices within a particular IP range are connected may not be true. [131090870010] |Cloud Server: Which MTA (exim/postfix/etc.) on What OS (Linux/FreeBSD) [131090870020] |Hello! [131090870030] |My company wants to migrate the current mail server into a Cloud Server Provider. [131090870040] |The Provider is the IaaS (Infrastructure as a Service) kind, not SaaS (Software as a Service). [131090870050] |That means I have to install the OS + MTA myself. [131090870060] |I'd really appreciate it if you can give me guidance, pro/con analysis, experience, etc. on the following combinations: [131090870070] |
  • Exim on Linux
  • [131090870080] |
  • Postfix on Linux
  • [131090870090] |
  • Exim on FreeBSD
  • [131090870100] |
  • Postfix on FreeBSD
  • [131090870110] |
  • (other MTA)* on Linux/FreeBSD
  • [131090870120] |*Please do not suggest sendmail and/or qmail. [131090870130] |Thank you all for your kind assistance. [131090870140] |PS: When I've made my choice, I'll change the question title to '[Solved]' and post my choice. [131090880010] |I'd steer you away from Gentoo as a server OS, simply because it's not exactly known as a stable platform with rigorous testing. [131090880020] |If you want to use Linux, try one of the long-term support options from Ubuntu, Debian, CentOS, RHEL or SuSE. [131090880030] |FreeBSD has postfix and exim in ports, although it has sendmail as its default MTA, so you'll find most of the MTA documentation for FreeBSD focuses mostly on Sendmail; but that doesn't mean it's impossible to use Exim or Postfix. [131090880040] |Also, be careful about your Cloud service provider. [131090880050] |I've heard nightmare stories of people setting up a server in the Cloud only to find the entire IP subnet blacklisted by popular DNS blacklists because the cloud provider also has customers who send spam. [131090890010] |My vote goes to Debian stable and exim4 -- stable, well-documented, and lightweight (you can, of course, use Postfix on Debian, but exim4 is the default MTA). [131090890020] |Here's the canonical documentation, for reference :) [131090900010] |I like CentOS and the Courier suite for this. [131090900020] |Courier provides the whole shebang, including an IMAP server (assuming you will have clients). [131090900030] |A good set of tools. [131090900040] |Also combine it with Spamassassin. [131090910010] |I would suggest you consider: [131090910020] |
  • Cyrus Murder, which enables you to share the entire IMAP mailbox namespace but distribute the load amongst a pool of servers, and
  • [131090910030] |
  • FreeBSD, which is a robust system used by many companies all over the world, even by Micro$oft.
  • [131090910040] |I love the way the FreeBSD ports mechanism greatly simplifies your everyday life, see Update your FreeBSD software with care. [131090910050] |It's a well-documented system. [131090910060] |I imagine a Cyrus+FreeBSD based cluster, backed by ZFS (also, on the FreeBSD.org wiki), the latter being actively ported/developed by the FreeBSD project. [131090920010] |If you intend to make it a dedicated mail server, I absolutely suggest Zimbra Community Edition. [131090920020] |It is a complete mail server solution suite, including IMAP4, POP3, IM, webmail GUI, document sharing, calendar, directory service, etc. [131090920030] |It is based on Postfix for the MTA. [131090920040] |It takes 5 minutes tops to install and has a very cool and complete web administration interface. [131090920050] |Good documentation and support through their wiki and forum sites. [131090930010] |Is it possible to run a Vim clientserver instance over SSH+tmux [131090930020] |I've been a vim junkie for a year or so now and I've got some great little tricks burnt into my memory. [131090930030] |I do a lot of development (at the office) in a terminal on Ubuntu. [131090930040] |One of the things I value most is having tmux taking care of my sessions so that if Gnome, or anything else, decides to kirk out then I haven't lost a thing. [131090930050] |Additionally, I run a vim clientserver (C-b :neww 'vim --servername d') as one of my tmux windows and use it to receive any files I want to work with while I have my other tmux windows doing various jobs. [131090930060] |It means I can have one vim open with no confusion about what I'm editing. [131090930070] |When I'm working remotely (via SSH), I connect to my tmux session and carry on, but I have one problem: vim won't start a client-server instance. [131090930080] |I presume it is X related, but I can't seem to find the difference between a local tmux and a tmux via ssh. [131090940010] |Yes, it uses X properties to communicate.
[131090940020] |Try running your remote SSH session with the -X option to allow X11 forwarding. [131090940030] |You may also have to enable that feature on the server side. [131090940040] |You may also have to manually adjust the DISPLAY environment variable (to "localhost:10.0") since the existing session will already have your local one from when it started. [131090950010] |This really is more appropriate as a comment to @keith, but I wanted to elaborate a little: [131090950020] |His answer solved it perfectly with a simple -X when connecting, but I took it one step further by adding to my .ssh/config file: [131090950030] |Additionally, I was tempted to investigate repeating this for a headless development server that's located off-site and that I'm regularly working on. [131090950040] |
  • I installed the most basic X11 components with yum: yum group install 'X11 Desktop Environment'
  • [131090950050] |
  • Created an alias in my zsh aliases file to start X on demand (performance still matters on a dev machine!): alias initFakeDisplay='startx -- /usr/bin/Xvfb :2 -screen 0 1024x768x24 &'
  • [131090950060] |Then in future, I can connect with X11 forwarding enabled and use the same tmux+vim technique to run a vim clientserver. [131090960010] |Power management hook for running scripts on wake [131090960020] |I'm looking for a way to run an arbitrary script every time my laptop wakes up. [131090970010] |Add a script to /etc/apm/event.d? [131090980010] |Put it in /etc/apm/resume.d/ to run on wake-up. [131090990010] |Use Synergy when connected through different routers [131090990020] |I use synergy to control my laptop from my desktop when I have my laptop docked at my workstation. [131090990030] |Currently, I need to keep my laptop wired to the same router that my desktop is connected to in order to use synergy. [131090990040] |However, I will also take my laptop and work from other parts of the house. [131090990050] |I'd like to use just wireless on the laptop, so I don't have to switch connections to get up and move around. [131090990060] |My desktop has a wired connection to a router. [131090990070] |When I'm using wireless on my laptop, I'm connecting through a different router. [131090990080] |Is there a way to set up synergy to connect from the wireless router 192.168.17.* to the wired network 192.168.250.*? [131090990090] |I don't know much about network setup or terminology, so if I'm not including any pertinent details, please ask. [131091000010] |RJ-45 <-> RS-232, can I substitute the RS-232 side and abuse my Ethernet port as a COM port? [131091000020] |We have a PCIX board (with MIPS CPU) from some vendor and it has an RJ-45 jack on the "board-side", where the cable has an RS-232 plug on the other side. [131091000030] |The expected protocol is clearly the same as when using a null-modem cable between two machines. [131091000040] |Now I am wondering whether there is some Linux or *nix flavor that allows me to substitute the cable for a standard patch cable with RJ-45 connectors on both ends (not crossed for obvious reasons)?
[131091000050] |I read that someone suggested socat for some similar use-case, but it appears that the use case is so arcane that documentation is virtually non-existent on the topic. [131091000060] |Of course it's well possible that I simply used the wrong search terms so far. [131091000070] |Reasoning: it's almost hard to come by devices that have a COM port nowadays, but most have an ethernet port. [131091000080] |Also, the board is in a rather inaccessible location, so to connect to it we've been using mobile devices. [131091000090] |And then it's even harder to find machines with COM ports. [131091000100] |NB: I'm aware of RS-232 to USB devices, but would prefer a solution as pointed out as it seems more universal. [131091010010] |It's not clear exactly what you want. [131091010020] |If you want to use your existing Ethernet port, that won't be an option for many reasons; the most fundamental being that Ethernet requires precise termination and voltage levels, the hardware on the interface (the PHY) is made to deal with that. [131091010030] |Ethernet uses strictly +/- 0.85V and 50ohm termination impedance; RS-232 uses at a minimum +/- 3V, and could be as high as +/-25V, typically +/-12V. I imagine if you did try to connect your Ethernet port to an RS-232 line, it would fry your network interface. [131091010040] |Socat is a whole other level, and definitely is not useful here: it's a TCP/IP communication tool: it doesn't know anything about the electrical characteristics of the underlying hardware - it could talk over an RS232 line, but it'll be talking TCP, and you'd need to talk TCP on the other side for it to work. [131091010050] |Now, if what you're doing is designing a board, you could put an RJ45 jack with traces to a serial I/O port, which is exactly what the makers of your PCIX board have done. [131091010060] |I've also seen Cisco routers like this. [131091010070] |The tool you really need is an RS232->USB converter. 
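If you do end up with an RS-232-to-USB converter on some machine, socat can at least share that serial port over the network, so other hosts without COM ports can reach the board. The device path, baud rate, and TCP port below are assumptions, and since no hardware is present here the sketch only builds and prints the socat command rather than running it:

```shell
# Print the socat invocation that would bridge a USB-serial adapter to a
# TCP listener (hypothetical device path and port, for illustration only).
bridge_cmd() {
  dev=$1
  port=$2
  printf 'socat TCP-LISTEN:%s,reuseaddr FILE:%s,b115200,raw,echo=0\n' "$port" "$dev"
}

bridge_cmd /dev/ttyUSB0 5555
```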
[131091020010] |Many devices use nonstandard connectors for serial ports. [131091020020] |RJ-45 is probably the most common connector used for RS-232 serial after DB-9, but unlike with DB-9, there aren't even de facto standards for the pinout. [131091020030] |I'm aware of 4 different RJ-45 RS-232 pinouts, and there are probably others I haven't seen yet. [131091020040] |None of this means that people are somehow converting Ethernet to serial. [131091020050] |They merely happen to use the same connector. [131091020060] |There are many products that do provide that conversion, and in fact most of them do use the RJ-45 connector for their serial side. [131091020070] |For an example of a single-port converter, there's the Digi One SP. [131091020080] |More common are boxes that provide multiple serial ports, like the Digi PortServer and the Avocent (née Cyclades) Console Servers. [131091020090] |These are just two examples out of many. [131091020100] |Digi and Avocent are easily the two biggest players, but there are lots of smaller companies doing things like this. [131091020110] |Some of these boxes present themselves to the OS as /dev/ttyWHATEVER by installing a driver. [131091020120] |These have the advantage that any program that knows how to talk to a serial port can talk to the remote device plugged into the converter. [131091020130] |For the most part, the driver makes the converter appear no different from a local serial port. [131091020140] |For example, if a program opens one of the converter's /dev/ nodes and calls cfsetospeed() on it to set the serial port's bit rate, the driver forwards the command to the remote converter box, which changes the serial bit rate on that port. [131091020150] |The main problem you run into with that type of converter is that it isn't always possible to find a working driver for your particular kernel.
[131091020160] |This problem is becoming more common as the popularity of RS-232 drops, since it means the companies providing these boxes have dwindling incentives to keep enhancing their driver to track kernel differences. [131091020170] |The other major type of serial to Ethernet converter is purely a network appliance. [131091020180] |For example, with the Cyclades boxes, if it gets the IP 10.1.2.3 from the DHCP server, you can connect to 10.1.2.3 on TCP port 7001 to connect to the first serial port. [131091020190] |You'd use TCP port 7002 for the second serial port, and so forth. [131091020200] |To set serial port parameters with this sort of converter, you typically have to use a web management UI hosted by the converter box. [131091020210] |While this does mean you don't get features like automatic serial port parameter forwarding to the converter, you do get compatibility with any program that can open a TCP connection without needing a driver. [131091030010] |Make a 2nd root? [131091030020] |I was thinking maybe I'd like to disable passwords for the root user and only use keys to log in, and if I need to log in without keys, have a 2nd user that is like root in every way except with a different name. [131091030030] |Is there a way to have a 2nd root user? [131091030040] |Is this impossible? Is it a bad idea to disable passwords? [131091040010] |Instead of creating a "second root user", just give another user account privileges to use sudo. [131091040020] |That way if the root account is hosed, you can just do sudo bash or such to have root access to the system again. [131091040030] |Although it is better to just use sudo for individual commands... [131091040040] |Some distros such as Ubuntu are actually configured this way out-of-the-box, as a security measure. [131091050010] |I've actually seen a system set up the way you describe. [131091050020] |It had two lines in /etc/passwd for user ID 0 (root): [131091050030] |Or something like that.
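The /etc/passwd lines elided from the answer above were presumably something like the following (a hypothetical reconstruction; toor is the name traditionally used for such an alternate UID-0 account, and the shells and home directories are guesses):

```
root:x:0:0:root:/root:/bin/sh
toor:x:0:0:alternate root:/root:/bin/sh
```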
[131091050040] |I think it was a SunOS 4.1.x system, a long time ago, so maybe you can't do this on a modern Linux system. [131091050050] |I'd say go ahead and give it a try. [131091050060] |What can it hurt? [131091060010] |Problems viewing some bit.ly images [131091060020] |When I visit this page, I get this ugly image: [131091060030] |I checked with 3 browsers and get the same results. [131091060040] |Some other people don't experience the problem. [131091060050] |What's happening? [131091070010] |Map "windows" key on keyboard to "ctrl" [131091070020] |I am on Ubuntu and using a Microsoft keyboard. [131091070030] |I want to map my Win key to a Ctrl key. [131091070040] |How can I do that? [131091080010] |xmodmap lets you modify keymaps. [131091080020] |Make a file to hold xmodmap commands (~/.xmodmaprc is a common choice). [131091080030] |The Win keys are called "Super" in xmodmap (Super_L and Super_R for the left and right ones). [131091080040] |By default they're connected to mod4, so you want to remove them from that modifier and add them to control. [131091080050] |Add this to the command file: [131091080060] |Tell xmodmap to load it with: [131091080070] |It will only last as long as your X session does, so you'll need to rerun it each time, or put it in something like ~/.xinitrc so it will be run automatically. [131091090010] |Go into the keyboard settings, click "Options", expand "Alt/Win key behavior", and select "Control is mapped to Win keys". [131091090020] |(Command line version: setxkbmap -option altwin:ctrl_win, then edit /etc/X11/xorg.conf and add XkbOptions "altwin:ctrl_win" to the keyboard InputDevice section.) [131091090030] |(If there is already an XkbOptions line, then add it to that line, separated by a comma: XkbOptions "grp:alt_shift_toggle,altwin:ctrl_win".)
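The two snippets elided from the xmodmap answer above presumably looked like this (reconstructed from the answer's own description of removing the Super keys from mod4 and adding them to control):

```
! ~/.xmodmaprc: make the Win (Super) keys act as Ctrl
remove mod4 = Super_L Super_R
add control = Super_L Super_R
```

and the load command would then be xmodmap ~/.xmodmaprc.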
[131091100010] |I am getting an error trying to transfer a VirtualBox OSE VM to its partition [131091100020] |I ran these commands: [131091100030] |When I try to mount the partition, I get this complaint: [131091100040] |Now I know that the VM had an ext4 partition. [131091100050] |What did I do wrong? [131091110010] |Your image is a disk image, not a filesystem image. [131091110020] |The filesystem is on a partition inside that image (unless you did something really unusual). [131091110030] |You can confirm this by running file Debian.raw and fdisk -l Debian.raw. [131091110040] |The easiest way to access this partition is to associate it with a loop device. [131091110050] |If you can, make sure your loop driver supports and is loaded with the max_part option; you may need to run rmmod loop; modprobe loop max_part=63. [131091110060] |Then associate the disk image with a loop device, and voilà: [131091110070] |If you can't get the loop driver to use partitions, you need to find out the offset of the partition in the disk image. [131091110080] |Run fdisk -lu Debian.raw to list the partitions and find out its starting sector S (a sector is 512 bytes). [131091110090] |Then tell losetup you want the loop device to start at this offset: [131091110100] |If you want to copy the partition from the VM image to your system, determine its starting ($S) and ending ($E) offsets with fdisk -lu as above. [131091110110] |Then copy just the partition: [131091110120] |(If the source and the destination are not on the same disk, don't bother with dd, just redirect tail's output to /dev/sda5. [131091110130] |If they are on the same disk, dd with a large bs parameter is a lot faster.) [131091120010] |Measuring the length of a curved line [131091120020] |Is there a software tool that will allow me to measure the length of a curved line? [131091120030] |I have a series of lines in an image that I want to measure the length of.
[131091120040] |I have a tablet so I can trace over the lines in the image in order to identify the distance to be measured. [131091120050] |There are plenty of tools that do straight lines, but so far I can't find one that does free-form curves. [131091130010] |You can use inkscape: select/draw a path and then: [131091130020] |or gimp, through the "measure active path" plugin: [131091140010] |I know I've seen "mouse pedometers" in the past that would do what you want. [131091140020] |The only example I can find right now is "kdetoys-mousepedometer", and then only in some RPM archives. [131091150010] |Netflix on Linux [131091150020] |I recently installed Fedora 14 on my home PC so I have a dual boot system running Windows and Linux. [131091150030] |I probably would primarily use Linux on that machine as it's older and Linux manages its resources MUCH better than Windows does, BUT I'm a bit of a Netflix junkie, and from what I've read there isn't currently a solution that allows Netflix to work on Linux. [131091150040] |Evidently Moonlight (which, as I understand it, is supposed to be like Silverlight) is missing a key piece of functionality. [131091150050] |So is there really no solution? [131091160010] |From what I understand, the only way you can reliably watch Netflix is through a virtual machine running Windows. [131091160020] |At this point, playing natively in Linux is not supported. [131091170010] |Number of running processes shown in top [131091170020] |The usual maximum number that I have seen in the "running" field displayed in top(1) is the number of logical CPUs installed in the system. [131091170030] |However, I have observed that under Ubuntu 10.04 (not checked in other versions), sometimes top(1) shows more processes running than the limit I've mentioned. [131091170040] |What can cause the display of, say, 2 running processes in a single-core system? [131091180010] |Hyperthreading, perhaps.
[131091180020] |Note that top's man page says: [131091180030] |Tasks shown as running should be more properly thought of as 'ready to run' -- their task_struct is simply represented on the Linux run-queue. [131091180040] |Even without a true SMP machine, you may see numerous tasks in this state depending on top's delay interval and nice value. [131091190010] |The “running” field in top doesn't show the number of tasks that are simultaneously running, it shows the number of tasks that are runnable, that is, the number of tasks that are contending for CPU access. [131091190020] |If top could obtain all system information in a single time slice, the “running” field would be exactly the number of tasks whose status (S column) shows R (again, R here is often said to mean “running”, but this really means “runnable” as above). [131091190030] |In practice, the number may not match because top obtains information for each task one by one and some of the runnable tasks may have fallen asleep or vice versa by the time it finishes. [131091190040] |(Some implementations of top may just count tasks with the status R to compute the “running” field; then the number will be exact.) [131091190050] |Note that there is always a runnable task when top gathers its information, namely top itself. [131091190060] |If you see a single runnable task, it means no other process is contending for CPU time.
[131091200070] |I suspect there's a way to do this with DBUS, but I'm fuzzy on the specifics. [131091200080] |Maybe someone with stronger DBUS-fu could point me in the right direction. [131091210010] |Try exporting the DISPLAY variable for the existing X session. [131091210020] |Assuming it's :0, add the following to the start of your script (well, at least before you run any X-related commands). [131091210030] |I think you may also need to grant authorization to the local host, by running the following in one of your existing X terminals. [131091210040] |(I'm assuming it's you, and only you, on this system; if not, this will give other local users access to your X display.) [131091210050] |It's been a while since I've really played with X, so hopefully someone can come along and correct me. [131091220010] |If you have gnome-control-center installed, run gnome-mouse-properties, visit the Touchpad tab, and click on Two-finger scrolling: [131091220020] |It works very well, and across wake-ups too. [131091220030] |So, unless you are using your scripts for something else too, throw them away :) [131091230010] |Persist clipboard contents in vi [131091230020] |If I want to copy text from a file in vi to another file, I have to highlight the text, Control-Shift-C it, quit the first file, open the second, and then paste it via Control-Shift-V. [131091230030] |It feels like there must be an easier way to do this - that is, keyboard commands only. [131091230040] |Any suggestions? [131091240010] |Sure: [131091240020] |
  • Open your file: vi foo
  • [131091240030] |
  • In your file, open the second: ESC :open bar
  • [131091240040] |
  • Return to the first: ESC :prev
  • [131091240050] |
  • To copy the content of the file: ESC :1,$ y
  • [131091240060] |
  • To go to the next file: ESC :n
  • [131091240070] |
  • Paste it: ESC p
  • [131091240080] |Maybe there's a shorter way? [131091250010] |While editing a file you can use :e filename to open another file and :rew to return to the original file like this: [131091250020] |
  • open original file vi foo
  • [131091250030] |
  • yank text e.g. yy (yank a line)
  • [131091250040] |
  • open a second file to edit ESC :e bar
  • [131091250050] |
  • put your text p (then save :w)
  • [131091250060] |
  • go back to the first file ESC :rew
  • [131091260010] |vim can be set up with X clipboard support. [131091260020] |Pass --with-x=yes to configure if your distribution does not bundle vim with X support. [131091260030] |By default the X selection (PRIMARY) is available as the * register and the X clipboard as the + register, so selecting a region and typing "*y will yank it to the X selection, and "+y will yank it to the X clipboard. [131091260040] |Similarly, "*p will paste from the X selection and "+p from the X clipboard. [131091260050] |You can set the default (anonymous) register in vim to be the X selection by running set clipboard=unnamed. [131091260060] |Then you can just y and p from X directly. [131091270010] |Is there a one-liner that allows me to create a directory and move into it at the same time? [131091270020] |I find myself repeating a lot of... [131091270030] |mkdir longtitleproject [131091270040] |cd longtitleproject [131091270050] |Is there a way of doing it in one line without repeating the directory name? [131091270060] |I'm on bash here. [131091280010] |In your ~/.bashrc: [131091280020] |Then run mkcd longtitleproject. [131091280030] |You might want to omit -p, use pushd instead of cd, give the function a different name, or other variants. [131091280040] |There are also less specialized ways to not have to retype the word from the previous line: [131091280050] |
  • Type cd , then Esc . (or Alt+.) to insert the last argument from the previous command.
  • [131091280060] |
  • cd !$ executes cd on the last argument of the previous command.
  • [131091280070] |
  • Press Up to recall the previous command line, then edit it to change mkdir into cd.
  • [131091290010] |As per What customizations have you done on your shell profile to increase productivity?, this is how I do it: [131091290020] |it means it also works if the directory already exists. [131091300010] |Or you could just create a short variable on-the-fly and use it twice: x=longproject ; mkdir $x ; cd $x - which I admit is still longer than using a shellscript function :) [131091310010] |It would never have occurred to me to script up this behaviour because I enter the following on a near daily basis ... [131091310020] |where bash kindly substitutes !$ with the last word of the last line; i.e. the long directory name that you entered. [131091310030] |In addition, filename completion is your friend in such situations. [131091310040] |If your new directory was the only file in the folder, a quick double TAB would give you the new directory without re-entering it. [131091310050] |Although it's cool that bash allows you to script up such common tasks as the other answers suggest, I think it is better to learn the command-line editing features that bash has to offer, so that when you are working on another machine you are not missing the syntactic sugar that your custom scripts provide. [131091320010] |How do I find text within a file and have it search multiple subfolders? [131091320020] |I'm looking for a function name and the folder structure is deep and there are a lot of files to look through. [131091320030] |Usually I go with something like "find * | grep functionname" but is that the best way? [131091330010] |and in zsh with setopt extendedglob, [131091340010] |$ find -type f -print0 | xargs -r0 grep foo [131091340020] |-r in xargs avoids executing the command if there wasn't any input. [131091340030] |It's a GNU extension. [131091350010] |There's also ack, which is designed specifically for this kind of task and does subfolder search automatically. [131091360010] |what's wrong with grep -r (== grep --recursive)?
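A minimal demonstration (the paths and file contents below are invented for illustration):

```shell
# Create a small throwaway tree and search it recursively with grep -r.
# -r descends into subdirectories on its own; -n adds line numbers.
mkdir -p /tmp/grepdemo/src/deep
echo 'int functionname(void);' > /tmp/grepdemo/src/deep/header.h
grep -rn 'functionname' /tmp/grepdemo
# prints: /tmp/grepdemo/src/deep/header.h:1:int functionname(void);
```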
[131091360020] |Am I missing something? [131091360030] |(+1 for ack too -- I regularly use both) [131091360040] |edit: I found an excellent article detailing the possibilities and pitfalls if you don't have GNU grep here. [131091360050] |But, seriously, if you don't have GNU grep available, getting ack is even more highly recommended. [131091370010] |As an alternative to the find | xargs responses, you might consider using ctags since you say you are searching not for text, but specifically for function names. [131091370020] |To do this you would run ctags against your source to create a TAGS file, and then run your grep against the TAGS file, which will spit out lines in the following format: [131091370030] |Where tagname will contain the function name, tagfile is the file it is in, and tagaddress will be a vi command to get to that line. [131091370040] |(Could be just a line number.) [131091370050] |(Is there an easy way to do something similar with the various indices that Eclipse builds, or to just query the Eclipse database?) [131091380010] |find . | xargs grep will fail on filenames with spaces: [131091380020] |Note that even -print0 has this problem. [131091380030] |It's better in my opinion to use -exec grep with find, which will handle all filenames internally and avoid this problem: [131091390010] |If your disks are fast you may want to parallelize the grep: [131091390020] |Watch the intro video to learn more about GNU Parallel: http://www.youtube.com/watch?v=OpaiGYxkSuQ [131091400010] |Tmux viewport caused by multiple concurrent sessions. [131091400020] |When attaching to the same tmux session from multiple computers using ssh and tmux attach, my screen looks like: [131091400030] |I was wondering if there is a command to get rid of the viewport. [131091410010] |Probably the width/height (columns/rows) of the "original" terminal from which you launched the tmux session is lower than that of the terminal you're attaching from.
[131091410020] |Personally I don't use tmux, but that happens to me with screen when I launch from an 80x25 terminal and then attach from another terminal with 80x50 columns/rows. [131091420010] |Zero-fill numbers to 2 digits with sed [131091420020] |Input: [131091420030] |Desired output: [131091420040] |How can I add a 0 if there is only a single digit, e.g. 1 in the "day" part? [131091420050] |I need this date format: YYYYMM DD. [131091440010] |Another solution: awk '{$2 = sprintf("%02d", $2); print}' [131091450010] |Here is a (non-sed) way to use bash with extended regex. [131091450020] |This method allows scope to do more complex processing of individual lines (i.e. more than just regex substitutions). [131091450030] |output: [131091460010] |What program do I use to check mail? [131091460020] |I'm receiving "You have mail" messages and according to How to remove “You have mail” welcome message I should read my mail with mail. [131091460030] |However I cannot find the command in my system (Ubuntu 10.04 server). [131091460040] |What do I need to install? [131091470010] |Just install mailutils, which contains mail: [131091470020] |Read more about mail and GNU mailutils here [131091480010] |Another program you can use is mutt. [131091480020] |I prefer using mutt to mail - it just has a nicer interface in my opinion. [131091480030] |should work - but I use Fedora, not Ubuntu, so can't confirm this. [131091490010] |You may already have mail installed. [131091490020] |If so, you can read your mail by entering mail at the command line. [131091490030] |Welcome to the world of choice. [131091490040] |You can use pretty well any mail reader you choose. emacs users can read mail from within their editor. [131091490050] |Install a POP3 or IMAP server and you can read your mail from your Windows PC, Mac, or other devices. [131091490060] |If you set up a .forward or .procmailrc file then you may be able to forward your mail to another e-mail address and read it from there.
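If you just want to check whether there is any local mail at all, you can look at the mbox spool directly without installing any mail reader; a sketch (the spool path is the conventional one and may differ on your system):

```shell
# Check the local mail spool directly.
# $MAIL usually points at the user's mbox; /var/mail/$USER is the common default.
mailbox="${MAIL:-/var/mail/$USER}"
if [ -s "$mailbox" ]; then
    echo "You have mail in $mailbox"
else
    echo "No mail."
fi
```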
[131091500010] |On Debian and derived distributions, you can use the apt-file command to search for a package containing a file. [131091500020] |Install apt-file (apt-get install apt-file) and download its database (apt-file update, Ubuntu does it automatically if you're online). [131091500030] |Then search for bin/mail: [131091500040] |With the command-not-found package installed, if you type a command that doesn't exist but can be installed from the Ubuntu repositories, you get an informative message: [131091500050] |If you're not after mail specifically, but after any program to read your local mail from the command line, there are much better alternatives. [131091500060] |All mail user agents provide the mail-reader virtual package, so browse the list of packages that provide mail-reader and install one or more that looks good to you (and doesn't use a GUI, if it's for a server). [131091500070] |mutt's motto is “All mail clients suck. [131091500080] |This one just sucks less.”, and I tend to agree, but in the end it's a very personal choice. [131091510010] |CentOS or Scientific Linux [131091510020] |When I need to create a new server I've always chosen CentOS, mainly for its compatibility with Red Hat, which I consider the de facto standard for the general-purpose Linux server. [131091510030] |Now the problem is that Red Hat 6 has been out for quite a while and there is no sign of CentOS 6 (even the CentOS 5.6 ISO is still missing). [131091510040] |If you needed to create a new server now, what would you do? [131091510050] |Stay with the old CentOS 5.5 or switch to the recently released Scientific Linux 6.0? [131091510060] |I looked at the SL 6.0 website and they declare great attention to compatibility with RH; I've never tried it myself, so I just wanted someone's real-life opinion. [131091520010] |It all depends on whether you need the newer functionality; if not, I would stick with CentOS.
[131091520020] |Bear in mind the Red Hat Support lifecycle when making your decision. [131091520030] |You could also consider Oracle Linux [131091530010] |Both projects, of course, are binary-compatible rebuilds from the source provided by Red Hat. [131091530020] |The primary differences are in the development/build model. [131091530030] |CentOS only makes changes to remove Red Hat branding, or very occasionally as a last measure to get something to build. [131091530040] |They aim to be bug-for-bug compatible with Red Hat Enterprise Linux. [131091530050] |Scientific Linux makes more customizations and additions, for example building OpenAFS packages. [131091530060] |(They do keep the SRPMs for these separate, though.) [131091530070] |CentOS is a "community" distribution, but it's really built and maintained by a small (but active) group of volunteer developers in a closed manner. [131091530080] |The lack of communication from this group is sometimes frustrating and I think a problem they need to solve. [131091530090] |Scientific Linux is much more open in its development model, and it's a lot easier to see what's going on. [131091530100] |In my impression, Scientific Linux is a lot more likely to accept "get it working" hacks, and CentOS a bit more careful, even if it means delay — as in the case of the CentOS 6 release. [131091530110] |Both are in production use at a lot of serious institutions. [131091530120] |I don't think the delay of the CentOS 6 release is a huge cause for concern — it's just the way things are. [131091530130] |(I think prioritizing 5.6 was the right move, since that's blocking some security updates. [131091530140] |The fact that that's not done yet is more worrying.) [131091530150] |Hopefully after the actual work of getting the release out the door is done there can be some evaluation and maybe some changes made in communication of status. [131091530160] |But I also think you won't go wrong deciding to use Scientific Linux now. 
[131091530170] |They're basically interchangeable, after all, and it shouldn't be terribly painful to switch if you change your mind later. [131091530180] |If your system is on the front lines and is mission-critical for your business, buying a Red Hat Enterprise Linux subscription should also be on your list of options to consider. [131091530190] |It's pricey, but not that pricey all considered, and by their nature CentOS and Scientific Linux always have some latency in getting security updates out the door. [131091540010] |Oracle Linux has the same closed update repositories as Red Hat. [131091540020] |I have two kinds of servers, mission-critical and not. [131091540030] |I usually use the least critical servers to try out the new features, to be ready when I need to create a new mission-critical machine (usually with an RH license); this is the main reason why I usually try out the latest RH clone available. [131091540040] |They usually aren't on the front line. [131091550010] |Script for opening a file in an already existing emacs process [131091550020] |I keep emacs open all the time I'm working. [131091550030] |I have several scripts that launch all the apps and files related to a specific project. [131091550040] |But they will launch emacs again, rather than simply causing the current emacs to open the requested files. [131091550050] |I'd rather the current emacs simply opened the project text files in a new buffer. [131091550060] |Any ideas how I can do that? [131091550070] |Thanks [131091560010] |M-x server-start inside the Emacs session, then use emacsclient -n file1 file2 ... to add files to the existing Emacs. [131091560020] |There are additional options you might want to use, e.g. -c to open the files in a new window (frame). [131091570010] |Put (server-start) in your .emacs file. [131091570020] |Add this to ~/.bashrc: [131091570030] |then use myedit as your editor. [131091570040] |You will have to use the -c option to bring up a window.
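The ~/.bashrc snippet itself did not survive formatting here; a hypothetical reconstruction of such a wrapper (the name myedit comes from the answer, the body is an assumption):

```shell
# Hypothetical wrapper: edit files in the already-running Emacs server.
# -c opens a new frame (window); -n returns control to the shell immediately.
myedit() {
    emacsclient -c -n "$@"
}
```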
[131091570050] |So you may do this: [131091570060] |or [131091580010] |xmessage over ssh [131091580020] |The following command prints a message over ssh : xmessage Message -display :0 &How does it work? there is no -display option in xmessage's man page. [131091590010] |It's included by (obscure) reference. [131091590020] |SEE ALSO [131091590030] |X(7), echo(1), cat(1) [131091590040] |And buried down a ways in X(7): [131091590050] |OPTIONS [131091590060] |Most X programs attempt to use the same names for command line options and arguments. [131091590070] |All applications written with the X Toolkit Intrinsics automatically accept the following options: [131091590080] |
    -display display
    [131091590090] |
    This option specifies the name of the X server to use.
[131091590100] |followed by a number of other X Toolkit Intrinsics (Xt) standard options. [131091590110] |More modern toolkits have similar common options, which you can see with the --help-all option. [131091600010] |How can I restore my Linux? [131091600020] |I had a hard drive with two partitions: one was 460GB NTFS and the other was 5GB ext3 Ubuntu 10.10. [131091600030] |I wanted to extend the Ubuntu partition, so I was going to shrink the NTFS partition by 15GB, but I accidentally right-clicked the NTFS partition and chose "Make Partition Active". [131091600040] |It actually made the whole ext3 partition become "Unallocated". [131091600050] |It seems I can't boot from it anymore. [131091600060] |My question is, how can I undo it? [131091600070] |Because it took like a millisecond to complete, I'm almost sure the data is still there. [131091600080] |Thanks. [131091610010] |The program calling the Linux partition "unallocated" sounds like the Windows Disk Management tool. [131091610020] |Microsoft could make it recognize non-Microsoft partition types, but they haven't. [131091610030] |It may be that your Ubuntu partition is still there and unharmed. [131091610040] |If that is the case, you may just have to mark the Ubuntu /boot partition active. [131091610050] |The Windows tool will probably refuse to mark any non-Microsoft partition active, so you'll have to use another tool. [131091610060] |I recommend booting your system with the Ubuntu install disk and telling it to use rescue mode. [131091610070] |I haven't used the Ubuntu rescue mode recently; it may have a menu option for fixing this sort of thing automatically. [131091610080] |If not, you will have to get to a command prompt, then say something like this: [131091610090] |That sets /dev/sda1 to be active. [131091610100] |That's the most likely one to be /boot, but isn't necessarily it. [131091610110] |You can try rebooting now.
[131091610120] |If that didn't work, try repairing your GRUB boot loader. [131091610130] |If that also fails, go back into rescue mode, get into fdisk and look at the partition table again. [131091610140] |If any look to be marked as something other than either NTFS, Linux, or Linux swap, and the odd one out is 5 GB, you may have found the "unallocated" partition. [131091610150] |Say it's /dev/sda3. [131091610160] |Then in fdisk: [131091610170] |That sets /dev/sda3 to partition type 83, which says it contains an ext2 filesystem, or one of its successors. [131091610180] |Again, try booting. [131091610190] |If that's still not doing it, there are other steps you can take, but we've run out of easy ones. [131091610200] |It sounds like this was just a hobby install, so it's probably not worth going to heroic measures to fix it. [131091610210] |If it comes to reinstalling, consider using Wubi this time around instead of installing Ubuntu in a separate partition. [131091610220] |Wubi lets you create a virtual disk image inside your Windows partition, which is easier to manage and has less risk of a fight with Windows. [131091620010] |Ungrabbing keys [131091620020] |I have somehow managed to get my keyboard to work, and to be able to select with the mouse cursor, only when the Shift and Control keys are held. [131091620030] |How can I undo this? [131091620040] |I have tried System -> Preferences -> Keyboard but nothing appears there to resolve this. [131091620050] |Edit: [131091620060] |I can only enter text or select anything (even opening a new tab in a web browser) if the Shift and Control keys are held. Once the selection is "active" I can type normally, until the next time I need to type, etc. [131091620070] |Below as recommended: [131091620080] |Edit: [131091620090] |Pressing (and holding) the Alt or Windows key has a similar effect to holding the Shift and Control keys (Shift and Control held simultaneously).
[131091630010] |It looks like your system believes that you're holding down the mod2 modifier, so try pressing and releasing the corresponding key. [131091630020] |Modifiers other than Shift and Ctrl don't have standard assignments, so I can't tell you what the corresponding key is. xmodmap -pm would tell you, but try pressing and releasing the Alt key or the Windows key. [131091630030] |(I'm basing this on state 0x11, which I think means Shift + mod2, but I'm not completely sure that's what it means. [131091630040] |But the symptoms do look like a missed key release event.) [131091630050] |Update: The output from xmodmap -pm shows that mod2 is Num Lock, so try toggling Num Lock off. [131091640010] |Does SELinux provide enough extra security to be worth the hassle of learning/setting it up? [131091640020] |I recently installed Fedora 14 on my home PC and have been working on setting up different server-related features such as apache, mysql, ftp, vpn, ssh, etc. [131091640030] |I very quickly ran into what felt like a barrier when I discovered SELinux, which I had not previously heard of. [131091640040] |After doing some research it seemed as though most people were of the opinion that you should just disable it and not deal with the hassle. [131091640050] |Personally, if it really does add more security I'm not opposed to dealing with the headaches of learning how to set it up appropriately. [131091640060] |Eventually I plan on opening my network up so that this PC can be accessed remotely, but I don't want to do that until such time as I'm confident that it's secure (more or less... lol). [131091640070] |If you have set it up and gotten it functioning correctly, do you feel that it was worth the time and hassle? [131091640080] |Is it really more secure? [131091640090] |If you have opted out of using it, was that decision founded on any research worth considering in my situation as well?
[131091650010] |SELinux enhances local security by improving the isolation between processes and providing more fine-grained security policies. [131091650020] |For multi-user machines, this can be useful because of the more flexible policies, and it raises more barriers between users, so it adds protection against malicious local users. [131091650030] |For servers, SELinux can reduce the impact of a security vulnerability in a server. [131091650040] |Where the attacker might otherwise be able to gain local user or root privileges, SELinux might only allow him to disable one particular service. [131091650050] |For typical home use, where you'll be the only user and you'll want to be able to do everything remotely once authenticated, you won't gain any security from SELinux. [131091660010] |Memory management principle used by Linux [131091660020] |Is memory management in the Linux kernel done using paging or segmentation, or both? [131091670010] |Memory management under Linux works exclusively by paging. [131091670020] |Linux supports both "small" (usually 4kB) and "large" (2MB) pages. [131091670030] |The former "just work" without you knowing or doing anything special, and the latter need special treatment (they have to be reserved at boot time and mapped via a special device). [131091670040] |The only context in which "segment" appears under Linux is the brk() syscall, which is used to increase the data segment of a process (you will not normally need this; malloc does it without you having to care). [131091670050] |But either way, the memory comes in pages and is managed in pages, and it has nothing to do with x86 segmentation. [131091680010] |Linux uses both. [131091680020] |It uses segmentation to map all the available address space while giving different access rights: basically a kernel and a user space view. [131091680030] |You can grep the Linux source code for KERNEL_DS (kernel data segment) for some examples.
[131091680040] |Paging is then used for implementing virtual memory (grep for "struct gdt_page" in the kernel for a starting point). [131091690010] |vim: use string from search in replace command [131091690020] |I often use vim's / search command to verify my regular expressions (just to see what they match). [131091690030] |After that I usually use the :%s replace command, where I use that regexp from the search as the string to be replaced, e.g. I first look for such a string: [131091690040] |It matches exactly what I want, so I do my replace: [131091690050] |But I have to write the entire regexp again here. [131091690060] |Usually that regexp is much longer, which is why I'm looking for a solution: [131091690070] |Is there an existing shortcut or vim script for pasting that search pattern directly into the replace command? [131091690080] |P.S. [131091690090] |I use vim in a terminal (no gvim). [131091700010] |In general, an empty regular expression means to use the previously entered regular expression, so :%s//\1/g should do what you want. [131091710010] |Good book for networks [131091710020] |I need a multilingual book (with a Polish translation especially) about anything concerning the internet. [131091710030] |From the OSI model to the newest technologies. [131091710040] |Do you know any like this? [131091720010] |Which version of openSUSE is closest to SLES11SP1 [131091720020] |Which version of openSUSE is closest to SLES11SP1 in terms of package versions? [131091730010] |I would say it's probably closest to openSUSE 11.2, but it's not an exact match. [131091730020] |Method: compare the package versions of SLES to openSUSE using DistroWatch [131091740010] |How to use the EXTRA_FIRMWARE_DIR kernel option? [131091740020] |I am using Gentoo, and I need to load an extra firmware file to get my USB WiFi adapter to work. [131091740030] |I found an EXTRA_FIRMWARE_DIR kernel option, but I do not understand if it is used at compile time only or if it is effective after the new kernel is used.
[131091740040] |My WiFi adapter chip is Atheros, and according to this page, I have to put the firmware in the right place. [131091740050] |On Ubuntu, I found the /lib/firmware directory as indicated in that page, but I cannot find that directory on Gentoo. [131091750010] |Take a look at this: http://www.kernel.org/doc/menuconfig/drivers-base-Kconfig.html [131091750020] |In particular: [131091750030] |
  • EXTRA_FIRMWARE "allows firmware to be built into the kernel, for the cases where the user either cannot or doesn't want to provide it from userspace at runtime"
  • [131091750040] |
  • EXTRA_FIRMWARE_DIR "controls the directory in which the kernel build system looks for the firmware files listed in the EXTRA_FIRMWARE option. [131091750050] |The default is the firmware/ directory in the kernel source tree, but by changing this option you can point it elsewhere, such as the /lib/firmware/ directory or another separate directory containing firmware files".
  • [131091750060] |By the way, as far as getting your wireless card working, have you taken a look at these pages?: [131091750070] |
  • http://en.gentoo-wiki.com/wiki/TL-WN821N
  • [131091750080] |
  • http://bugs.gentoo.org/278385
  • [131091760010] |Font issue, chrome on ubuntu [131091760020] |The chrome font on Ubuntu makes it hard to read code. [131091760030] |How can I change it? [131091770010] |Click the wrench icon. [131091770020] |Select "Preferences" [131091770030] |Select "Under the Hood" [131091770040] |Under "Web Content" you can "Customize Fonts..." -- you'll want to change the "Fixed-width font." [131091770050] |For some reason, web browsers like to make the monospace font smaller than other text, which can make code harder to read. [131091770060] |Also handy: hold the control key and hit - or + to decrease / increase the font size. [131091780010] |Copy a directory to external HDD [131091780020] |Hi! [131091780030] |I'm trying to copy a directory to an external HDD - I mounted the device and then typed: cd root tar -cf - * | (cd /mnt ; tar -xpf -) [131091780040] |I got this error message: "cowardly refusing to create an empty archive" [131091780050] |When I do ls in the same root directory it is not empty at all - all my needed files are there. [131091780060] |Why does this happen? [131091790010] |Why don't you simply use cp -pr source destination? [131091790020] |Anyway: [131091790030] |works just fine. [131091800010] |I find the best thing for copying whole directory structures is rsync [131091800020] |This also has the advantage that you can do it to or from a remote directory through ssh. [131091810010] |If you want to copy the root filesystem and worry about special files and devices, the best way is:
  • first, mount / in a subdirectory using the bind option of mount; this way you won't have to worry about /proc, /dev, /sys and other mounted filesystems
  • [131091810030] |
  • then use a command that knows how to handle special files, like cp -a or rsync -a
  • [131091810040] |Let's say you have mounted the external drive under /mnt/external [131091810050] |or [131091810060] |or (if you like tar so much; by the way, the worst option) [131091810070] |add -v to any of the above for verbose output, but it will slow the process down a bit [131091820010] |Please use the POSIX pax utility. [131091820020] |Unlike cp, pax works the same on every system. [131091820030] |With pax you can always safely copy file systems with special files such as device nodes. [131091820040] |To copy one file system /mnt/foo into /mnt/bar, preserving permissions, timestamps and special files, do: [131091830010] |Support of USN Journal (change journal) in NTFS-3G driver [131091830020] |Does anyone know if the ntfs-3g driver implements the change journal? [131091830030] |I checked the official website but couldn't find any information re. [131091830040] |USN. [131091840010] |I'm not finding much documentation for USN + ntfs-3g, but looking through the ntfs-3g sources, in include/ntfs-3g/layout.h, I found the following: [131091840020] |(See also: struct STANDARD_INFORMATION's usn field) [131091840030] |So apparently they are using USNs, but I don't know the proper way to get at them. [131091840040] |I'd start by looking at how NTFS_RECORD is used, and try to work your way out to the API from there. [131091850010] |How were the weightings in the Linux load computation chosen? [131091850020] |Hello, [131091850030] |In Linux, the load average is said to be over 1 min/5 min/15 min. [131091850040] |The formula used by the kernel is actually an exponential moving average. [131091850050] |If we define cpuload(1) as the first computation of the 1-min cpu load, and active() as the function returning the number of processes in state "running" or "runnable" on the system, then the formula used by the kernel to compute the nth 1-min cpu load is: cpuload(0) is 0; it is the value stored in memory before the first execution of cpuload().
[131091850060] |I wonder how the weighting 2^(-5*log2(e)/60) has been chosen. [131091850070] |In my opinion 2^(-5/60) would have been better, because 1 min would then be the half-life of the number of processes (because (2^(-5/60))^12 = 1/2). [131091850080] |Maybe it's helpful if I post the explicit formula of cpuload(n) in addition to the recursive definition above (right-click to see it in full size): [131091860010] |Find a file in the path without "which"? [131091860020] |I am (somehow) able to run a script: [131091860030] |But which can't find it:
  • How is this possible?
  • [131091860050] |
  • How can I find where this file is?
  • [131091860060] |I'm using bash. [131091870010] |You may be using bash, but the syntax of the which output shows that you use the old which written in csh. [131091870020] |The PATH shows up quoted by parentheses, and the directories in PATH have entries like /opt/SUNWspro/bin and /usr/ccs/bin which only make sense in Solaris. [131091870030] |That's consistent: Solaris used the csh which. [131091870040] |Here's my guess: you've got one PATH for bash, and another for csh. [131091870050] |This might be a system problem. [131091870060] |As I recall, Solaris keeps /etc/profile and /etc/cshrc files for system-wide PATH setting. [131091870070] |Those two initialization files might set different PATH variables for different shells. [131091870080] |Do "echo $PATH" under bash, and see if it agrees with what the which command prints out as a PATH string. [131091880010] |For bash use type -a assemble.sh [131091890010] |You can use locate assemble.sh to find the location of the file. [131091900010] |Or split the path, and use it in find - the first match should be the solution: [131091900020] |'type' is of course easier. [131091910010] |Remove green from Linux Mint browser [131091910020] |When I use Firefox in Linux Mint, everything has a kind of green shade to it. [131091910030] |The monitor is fine. [131091910040] |I'm guessing it's Mint branding. [131091910050] |How do I remove it? [131091910060] |Edit [131091910070] |My original question wasn't clear - it's not the browser chrome that has the green shade, it's the pages themselves. [131091910080] |Here's a screenshot - compare it with the actual page, and you'll see what I mean. [131091910090] |To prove it's not the monitor, I right-click on the photo of the sausages (I bet that's the first time that word's appeared on unix.se.com!) and save the image to the desktop. [131091910100] |Then I view the image by double-clicking on the new file and it displays fine.
[131091910110] |http://imgur.com/qiX6H.png [131091910120] |There are two plugins installed - Mint Search Enhancer and Stylish - deactivating these makes no difference. [131091920010] |It's probably a custom theme. [131091920020] |Click Tools > Add-ons > Themes and select a different theme. [131091930010] |Creating an extended partition [131091930020] |I'm trying to create an extended partition. [131091930030] |In GParted, I shrunk the size of the existing partition and now want to create a new EXTENDED partition in the free, unallocated space. [131091930040] |GParted only lets me create a PRIMARY partition. [131091930050] |What am I doing wrong here? [131091930060] |Here's what I've got right now: http://i.imgur.com/WkEIS.png [131091930070] |You can actually ignore the flag for the swap as "boot." [131091930080] |That was me just messing around trying to get it to work. [131091930090] |I've removed that flag. [131091930100] |Not sure how the question of boot affects all of this...maybe it factors in somehow. [131091940010] |You already have an extended partition. [131091940020] |Unless you go through hoops, you can only have a single extended partition, but it can contain as many logical partitions as you want. [131091940030] |Filesystems live on primary partitions (of which you can have at most 3, or 4 if you don't have an extended partition) or logical partitions. [131091940040] |The extended partition is only a container for logical partitions. [131091940050] |Resize the extended partition sdc2 to occupy more space, and create a new logical partition sdc6 (and more if desired) inside the extended partition. [131091950010] |How can I install the KateSql plugin on Ubuntu? [131091950020] |Hello guys. [131091950030] |I am slightly new to Ubuntu and I just installed kate by typing sudo apt-get install kate. [131091950040] |Now I want to install this Kate SQL plugin and Google is not helping me.
[131091950050] |I downloaded a bunch of files from here, but what should I do with these files? [131091950060] |Where should I put them? [131091950070] |Would you please tell me how I can install this? [131091950080] |Thanks [131091960010] |Autoconf/Automake fails to generate AM_CFLAGS & AM_LDFLAGS for dependent D-Bus library. Why? [131091960020] |I want to build a program that uses D-Bus, using the automake/autoconf tools. [131091960030] |But the make command always reports an error: "dbus/dbus-glib.h": No such file or directory. [131091960040] |My OS is Ubuntu 10.10. [131091960050] |And I installed both "dbus-1" and "dbus-glib-1". [131091960060] |I checked the generated Makefile and found that both AM_CFLAGS and AM_LDFLAGS are empty. [131091960070] |Could somebody help? [131091960080] |Many thanks! [131091960090] |Here is my code: [131091960100] |configure.ac: [131091960110] |Makefile.am: [131091960120] |my-app.c [131091970010] |Did you run aclocal to bring in all the relevant definitions? [131091980010] |I found the root cause. [131091980020] |In configure.ac, I should have added the D-Bus C/LD flags before I call AC_CONFIG_FILES([Makefile]) and AC_OUTPUT. [131091980030] |Then the AM_CFLAGS and AM_LDFLAGS in the Makefile get valid values. [131091990010] |Why does my sudo ask for password only once but evaluate thrice? [131091990020] |This is what my shell prints when I enter a wrong password to a sudoed command. [131091990030] |Why could this happen? [131092000010] |This may be stupid, but is your keyboard working properly? You could be triple-pressing the Enter key. [131092000020] |Something similar has happened to me :) [131092010010] |I was experimenting with the RSA SecurID modules in my PAM configuration a while back, and successfully created exactly this behavior for myself, so I know one way to replicate what you're seeing.
[131092010020] |If you have a PAM module that fails (returns PAM_AUTH_ERR) as either the only configured required module or as requisite before anything else (or in a number of other possible configurations with similar effect), it will instantly return failure to sudo, which will then try again, twice, getting three failures in quick succession. [131092010030] |(You can configure passwd_tries in /etc/sudoers to a value other than 3 in order to get more or fewer failures, if for some reason you prefer.) [131092010040] |This doesn't prompt for your password once first, but there are definitely some PAM configurations which could do that, locking you out after the first failure and then returning failures quickly for the next tries.
[131092030030] |I can get the contents of the remote file by [131092030040] |However, piping that to diff [131092030050] |gives me this: [131092030060] |I have SSH keys set up, so it's not prompting me for a password. [131092030070] |What's a workaround for this? [131092040010] |Piping into diff is equivalent to running [131092040020] |diff path/file.name [131092040030] |and then, once it's running, typing the entire contents of the file. [131092040040] |As you can see, that's not what you expected. [131092050010] |Try using - to represent the standard input. [131092050020] |ssh user@remote-host "cat path/file.name" | diff path/file.name - [131092060010] |Here's one workaround: diff seems to accept <(expr) as arguments: [131092080010] |How to work around missing 'last-modified' headers? [131092080020] |I'm running wget like this: [131092080030] |I get a bunch of these messages: [131092080040] |I suppose that means that pages keep getting re-downloaded, even though I have them locally. [131092080050] |NOTE: I want this so that I don't have to re-download existing files each time I run the mirror command. [131092090010] |That means that the web server does not provide last modification info. [131092090020] |Many servers hide that info for static content to manipulate the browser's cache. [131092090030] |You have instructed wget to ask for that info with the --timestamping flag (which is redundant; it is implicitly enabled with --mirror). [131092090040] |If you don't want wget to re-download the same files on one run, try this (untested): [131092090050] |It isn't a good way to update an already existing mirror though (it won't re-download the same files even if they're changed), but AFAIK, there is no other workaround for wget. [131092090060] |edit: removed the -N that I accidentally left in the command line [131092100010] |Did you try adding the -c parameter?
[131092100020] |Excerpt from the wget manual: [131092100030] |-c --continue [131092100040] |Beginning with Wget 1.7, if you use -c on a non-empty file, and it turns out that the server does not support continued downloading, Wget will refuse to start the download from scratch, which would effectively ruin existing contents. [131092100050] |If you really want the download to start from scratch, remove the file. [131092100060] |Also beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. [131092100070] |The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt)---because ''continuing'' is not meaningful, no download occurs. [131092100080] |On the other side of the coin, while using -c, any file that's bigger on the server than locally will be considered an incomplete download and only "(length(remote) - length(local))" bytes will be downloaded and tacked onto the end of the local file. [131092100090] |This behavior can be desirable in certain cases---for instance, you can use wget -c to download just the new portion that's been appended to a data collection or log file. [131092100100] |To my knowledge it should skip files that are already downloaded and of the same size. [131092110010] |vim: delete lines before cursor [131092110020] |We can delete lines after the cursor (e.g. the next 3 lines) with: [131092110030] |But how can we delete the lines before the cursor (e.g. the 3 lines before the cursor)? [131092120010] |Same effect as 3dd, but upwards. [131092130010] |privoxy: rewrite html "http" links to "https" [131092130020] |I'm using the Privoxy proxy on my PC. [131092130030] |What is the rewrite rule in the user.action file to rewrite e.g. http://foo.org to https://foo.org? [131092130040] |Note that I want to rewrite, not redirect.
[131092130050] |So if I search Google for foo.org, then on the search page there would be https://foo.org. [131092130060] |Would the rewrite work on e.g. https://encrypted.google.com/? [131092130070] |Or is redirecting better because there could be e.g.: ? [131092140010] |The reason you need to redirect that URL rather than rewrite it is that you are visiting an unencrypted web page with the http:// (plaintext) URL, and the proxy needs to tell the browser to talk to the https:// URL. [131092140020] |If the connection were simply redirected at the SSL port, your browser wouldn't know what to do with an SSL response if it were somehow directed to the secure port using the HTTP protocol. [131092140030] |(Sadly, I'm not sure if anyone uses http-starttls, which should be able to handle that, but that's a separate question.) [131092140040] |By using a redirect, the proxy uses HTTP return codes to tell the browser to open a new connection, using HTTPS instead of HTTP. [131092150010] |Two networks cannot be accessed simultaneously [131092150020] |I have two networks on a server. [131092150030] |One is my internal network, and the other is an external IP address. [131092150040] |This is on Debian Lenny. [131092150050] |Here is my /etc/network/interfaces file: [131092150060] |I can reboot my system and sometimes eth1 is accessible via SSH, and other times eth0 is accessible. [131092150070] |Then sometimes eth1 will just stop being pingable altogether. [131092150080] |This is a fairly fresh install of Debian, and the only thing I have running is VMware Server 2.0, bridged to both of my network connections.
[131092160050] |You can do this by changing the corresponding stanza like this: [131092170010] |Mnemonics for Unix functions? [131092170020] |Does anyone have any useful mnemonics for remembering the order of function parameters or the return values of Unix system calls? [131092170030] |I am suffering from "memory leaks". [131092180010] |
  • Move cursor over syscall name
  • [131092180020] |
  • Press 'K'
  • [131092180030] |(Prerequisite: vi.) [131092190010] |I use -h or --help or -?. [131092190020] |Or sometimes man command. [131092200010] |

    1. command -h

    [131092200020] |If you don't know and the following rules of thumb don't work, use -h for help; it works 97%¹ of the time. [131092200030] |Other possible help flags: --help, -? (/?, /h)

    2. command --flag1 --flag2 arg1 arg2 file1 file2 (ClOverleAF)

    [131092200050] |Usually flags (options) come before the arguments and the file list. [131092200060] |/me making a mnemonic: grep 'c.*o.*a.*f' /usr/share/dict/words [131092200070] |ClOverleAF (command, option, args, files) [131092200080] |
  • Examples: [131092200090] |
  • grep -ri text dir1 dir2
  • [131092200100] |
  • awk '{ print $2 }' file1 file2
  • [131092200110] |
  • Exceptions: [131092200120] |
  • inverted args and files: find dir1 dir2 -name '*.bar'
  • [131092200130] |

    3. command source1 source2 destination

    [131092200140] |Commands to move/copy/link/... usually have the source and then the destination, in that order, on the line.
  • Examples: [131092200160] |
  • ln -s source destination
  • [131092200170] |
  • Exception: [131092200180] |
  • dd if=source of=destination
  • [131092200190] |

    4. command1 | command2 -

    [131092200200] |Some commands can use standard input/output as if it was a file. [131092200210] |
  • Examples: [131092200220] |
  • ls | vim -
  • [131092200230] |
  • dd if=/dev/sda | file -
  • [131092200240] |
  • wget -q -O - http://unix.stackexchange.com | grep '' [131092200250] |¹ result from a long and painful personal analysis. [131092200260] |After 10 years of reading man pages, I came to this accurate equation: (100-1d6)% [131092210010] |The most common syscalls, read(2) and write(2), take 3 parameters: descriptor, buffer and length. [131092210020] |They return the number of bytes actually read or written. close(2), obviously, takes one parameter: the descriptor to close. [131092210030] |Most syscalls return -1 in case of error and set errno. [131092210040] |Everything else I usually read in the corresponding man page. [131092210050] |Just don't forget the command: man 2 syscall_name [131092210060] |P.S.: do you have intro(2)? [131092220010] |This is a common problem for most developers. [131092220020] |If you write code often you will eventually find some patterns that you can use as mnemonics; for example, file descriptors are usually the first parameter. [131092220030] |But there will always be annoying exceptions that are hard to memorize. [131092220040] |You are approaching the problem the wrong way. [131092220050] |There is a good reason why so many sophisticated development tools exist. [131092220060] |Instead of making your life harder, start using a specialized source code editor or an integrated development environment. [131092220070] |Some of the standard features (auto-completion lists, real-time syntax checking, documentation tooltips) will eliminate your problem, taking away a big overhead for you. [131092220080] |After all, that's what computers are for: doing the boring repetitive tasks, so you can focus on the interesting stuff. [131092230010] |Cannot install Fedora 14 using bootable USB [131092230020] |I am not able to install Fedora 14 using a bootable USB from Windows. [131092230030] |The USB is plugged in.
The boot order is also changed from hard drive to USB. [131092230040] |The ISO used is ubuntu-10.10-desktop-i386.iso. I used LiveUSB Creator, Universal-USB-Installer-1.8.1.7, and unetbootin-windows-494 for making a bootable USB. Still I am not able to install Fedora. At boot time:
  • first time, there was no message, just a cursor blinking at the top-left corner
  • [131092230060] |2. [131092230070] |second time, the message which appeared on screen was "cannot find linux image" [131092230080] |I followed http://fedoraproject.org/wiki/FedoraLiveCD/USBHowTo to make a bootable USB [131092240010] |How can I pause in a shell script? [131092240020] |How can I make my shell script pause before continuing? [131092250010] |You mean sleep? [131092250020] |Or do you want to have something that waits for input before continuing? [131092250030] |You can do that with a read call. [131092260010] |He might also be looking for CTRL-Z, which pauses the current process. [131092270010] |Permissions: What's the right way to give Apache more user permissions? [131092270020] |Context: I am making an in-browser control panel that gives me one-button access to a library of scripts (.sh and .php) that I've written to process various kinds of data for a project. [131092270030] |It's a "one stop shop" for managing data for this project. [131092270040] |I've made good progress. [131092270050] |I have Apache, PHP and MySQL running, and I have my front end up at http://localhost. [131092270060] |Good so far! [131092270070] |Now the problem I'm having: I have an index.php which works fine, except the default Apache user (which on my machine is called "_www") seemingly doesn't have permissions to run some of my scripts. [131092270080] |So when I do: [131092270090] |I get the output of ls and whoami, but I get nothing back from the custom script. [131092270100] |If I run the custom script as me (in an interactive shell), of course it works. [131092270110] |Finally, my question: What's the right way to configure this? [131092270120] |Have the webserver run as me? [131092270130] |Or change permissions so that _www can run my custom scripts? [131092270140] |Thanks in advance for any help. [131092270150] |I'm not an advanced Unix user, so sorry if this is a dumb question!
[131092280010] |The first-best thing would be to put the script in a standard location (such as /usr/local/bin) where the web server would have sufficient permissions to execute it. [131092280020] |If that's not an option, you can change the group of the script using chgrp groupname path, then make it executable for the group by chmod g+x path. [131092280030] |If the _www user isn't already in that group, add it to the group by usermod -aG groupname _www. [131092290010] |To answer your question, it's better to give the _www group permission to execute your scripts. [131092290020] |Use an ACL to extend the permissions on your *.sh scripts to allow a user in the _www group execute privilege: [131092290030] |Also check each directory component of /Path/To/Custom and verify that Apache has permission to access (i.e. 'see') the scripts in /Path/To/Custom: [131092290040] |Each directory component above should grant Apache a minimum of execute permission, apart from the final component (Custom), where Apache needs both execute and read permission. E.g., if all the directory components above have other permissions of r-x, then Apache has all the access rights it needs to find your scripts in the Custom directory. [131092300010] |Which services should be disabled? [131092300020] |Hello, [131092300030] |Following is the output of $ chkconfig | grep 5:on on my laptop running Fedora 14. [131092300040] |I don't use NM for connecting to the Internet. [131092300050] |So I think that should be stopped right away. [131092300060] |Also, I have an ext4 filesystem, so I assume lvm2-monitor can be safely turned off. [131092300070] |My primary usage is surfing the net and coding in Python (newbie though). [131092300080] |Which services should I disable so that resources aren't kept busy unnecessarily? [131092300090] |Thanks.
[131092310010] |It's possible (and likely, if you didn't specify otherwise in the installer) that you are still using LVM with ext4 on the logical volumes. However, lvm2-monitor is really only useful if you're using LVM snapshots and/or mirrors, so it is safe to turn off. [131092310020] |Are you using NFS in any way? [131092310030] |If not, you can probably safely turn off the netfs, nfslock and rpc* services. [131092310040] |Do you use any mDNS (or ZeroConf) devices? [131092310050] |Avahi-daemon both registers your computer as an mDNS device and enables your system to search for similar devices. [131092310060] |If you don't plan on ever using that, you can disable it. [131092310070] |The other services are fairly normal to have running (like rsyslog), or are simply startup processes that don't leave around running processes (like smolt and udev-post). [131092320010] |You can do without NetworkManager, but I find it awfully handy for dealing with changing wifi on a laptop (which you say you're using). [131092320020] |If you don't need it, though, no harm in turning this off. [131092320030] |This is probably what's making your power button work, and what makes the system suspend when you close the lid. [131092320040] |You can live without it, but probably don't want to. [131092320050] |This is the userspace part of the Linux Auditing System, which is a more secure way of logging kernel-level events than syslog. [131092320060] |Among other things, it records SELinux alerts. [131092320070] |Strictly speaking, you don't need it. [131092320080] |This is for autodiscovery of services on a network — printers being a big example. [131092320090] |It's not required. [131092320100] |This will probably just start the right in-kernel CPU frequency scaling driver as an on-start operation, and not run anything. [131092320110] |(And if it can't for whatever reason and runs the daemon, you probably want it.)
[131092320120] |This runs hald, which is in the process of being obsoleted but which is, as of Fedora 14, still used for a few things. [131092320130] |Best to leave it on for now. [131092320140] |This sets up the kernel-level packet filter and doesn't leave any user-space daemon running. [131092320150] |Leave it on. [131092320160] |This is for multi-cpu/multi-core systems. [131092320170] |If you just have one, it will exit harmlessly after a few seconds. [131092320180] |You can gain a few milliseconds of startup time by chkconfiging it off. [131092320190] |If you're sure you're not using lvm (note that you can use ext4 on top of lvm!), you can turn off lvm2-monitor, and the same goes for md software RAID and mdmonitor. [131092320200] |This is the d-bus system message bus. [131092320210] |If you're using a modern desktop environment, you'll basically need this. [131092320220] |If you're not, you can get away without it, but will probably have to hack things up. [131092320230] |(I'm pretty sure gdm needs it, for example.) [131092320240] |This doesn't run any daemons, but starts any network filesystems in /etc/fstab. [131092320250] |It's harmless either way. [131092320260] |If you're not using NFS, NIS, or some other RPC-based service, all of these can go off. [131092320270] |You technically don't need to log anything, but you probably really want to. [131092320280] |You could consider tuning it to work in a more lightweight way on your laptop. [131092320290] |This sends anonymized usage statistics back to the Fedora Project. [131092320300] |It doesn't run anything, but there's a cron file in /etc/cron.d/smolt which checks the state here. [131092320310] |If you don't want it, I suggest removing the entire smolt package. [131092320320] |(But consider leaving it — the data is useful to the people putting the distro together for you, and it's only once a month.)
[131092320330] |Another run-and-done startup script, this one needed to keep rules generated during the boot process around once the system is up. [131092320340] |Leave it on. [131092330010] |Ubuntu 10.10 installed from Windows 7: how to expand drive? [131092330020] |I'm new to Linux. [131092330030] |I've installed Ubuntu 10.10 from Windows 7. [131092330040] |It's not in Virtual PC; it is independent. [131092330050] |It uses a virtual drive on C:, which I chose to be 10 GB. [131092330060] |Now I want to increase the size of this drive. [131092330070] |How do I do this? [131092330080] |I've configured the system for PHP and MySQL, installed a lot of software, and fixed wireless connection problems. I don't want to lose these things and have to go through the trouble again. [131092330090] |I've heard of backups, but I think it will take too long. [131092330100] |Is there any other simple and fast way?
  • Boot an Ubuntu 10.10 installation CD/USB, and perform a quick installation on a separate partition. [131092340100] |Don't bother configuring anything.
  • [131092340110] |
  • Boot your existing Wubi installation. [131092340120] |Mount your new direct-to-partition installation. [131092340130] |Let's call the mount point /media/new. [131092340140] |Open a terminal and run the following commands to overwrite the new partition with your existing data from the Wubi installation, and set up the bootloader for the new partition. [131092340150] |
  • Open both /media/new/etc/fstab and /var/tmp/fstab.new in an editor. [131092340160] |In each file, there is a line with a single / in the second column. [131092340170] |Replace the line in /media/new/etc/fstab with the one from /var/tmp/fstab.new.
  • [131092340180] |
  • Reboot. [131092340190] |You should now be in the new installation. [131092340200] |Make sure everything is ok, then you can remove the Wubi files on the Windows partition.
  • [131092340210] |Keep a bootable Ubuntu CD/USB at hand, in case something goes wrong. [131092340220] |Whatever you do, make backups. [131092340230] |Making backups is the only way not to lose data. [131092350010] |How to list files sorted by modification date recursively (no stat command available!) [131092350020] |How can I get the list of all files under the current directory along with their modification date and sorted by that date? [131092350030] |Now I know how to achieve that with find, stat and sort, but for some weird reason stat is not installed on the box and it's unlikely that I can get it installed. [131092350040] |Any other option? [131092350050] |PS: gcc is not installed either [131092360010] |Assuming GNU find: [131092360020] |Change 1n,1 to 1nr,1 if you want the files listed most recent first. [131092360030] |If you don't have GNU find it becomes more difficult because ls's timestamp format varies so much (recently modified files have a different style of timestamp, for example). [131092370010] |My shortest method uses zsh: [131092370020] |If you have GNU find, make it print the file modification times and sort by that. [131092370030] |I assume there are no newlines in file names. [131092370040] |If you have Perl (again, assuming no newlines in file names): [131092370050] |If you have Python (again, assuming no newlines in file names): [131092370060] |If you have SSH access to that server, mount the directory over sshfs on a better-equipped machine: [131092370070] |With only POSIX tools, it's a lot more complicated, because there's no good way to find the modification time of a file. [131092370080] |The only standard way to retrieve a file's times is ls, and the output format is locale-dependent and hard to parse.
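The GNU find route mentioned above can be sketched like this (a sketch only; it assumes GNU findutils and no newlines in file names):

```shell
# Print "mtime-in-epoch-seconds path" for each regular file,
# sort numerically (oldest first), then drop the timestamp column.
find . -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2-
```

Use sort -rn instead of sort -n to list the most recently modified files first.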
[131092370090] |If you can write to the files, and you only care about regular files, and there are no newlines in file names, here's a horrible kludge: create hard links to all the files in a single directory, and sort them by modification time. [131092380010] |Print specific Exif image data values with exiv2 [131092380020] |How do I print the image Exif date with a tool like exiv2? [131092380030] |My goal is to write the image year and month into separate variables. [131092380040] |Do I really have to parse the output with regex or is there an alternative to something like this: [131092390010] |You can use the -g flag to output only the property you're interested in, and -Pv to print the value without any surrounding fluff. [131092390020] |The result is easy to parse. [131092390030] |It may also be helpful to change the file date to match the image date: exiv2 -T DSC_01234.NEF. [131092400010] |Bash: why isn't "set" behaving like I expect it to? [131092400020] |I have observed that only the values of exported variables are appended to my PATH variable via my Mac OS X .bash_profile. [131092400030] |Whereas the values of locally set variables cannot be. [131092400040] |Why can't local variables be appended to the path? [131092400050] |“You've misinterpreted something, but what? [131092400060] |Post the code that puzzles you. – Gilles yesterday” [131092400070] |Please consider this snippet, where I set the MONGODB variable as: [131092400080] |I source .bash_profile in terminal. [131092400090] |I see the following echo ... [131092400100] |PATH=~/bin:/usr/bin:/usr/local/bin:/usr/local/sbin: [131092400110] |Whereas, if I export MONGODB instead, and source my .bash_profile, I see the following echo ... [131092400120] |PATH=~/bin:/usr/bin:/usr/local/bin:/usr/local/sbin:/usr/local/mongodb/bin [131092400130] |Perhaps, using "set" is improper? [131092410010] |I will try to explain how shell variables work.
[131092410020] |It is absolutely possible to append local variables to an environment variable like PATH. [131092410030] |Every running process has a list of environment variables. [131092410040] |They are name=value pairs. [131092410050] |When a new process is created with fork(), it inherits those variables (among other things like open files, user id, etc). [131092410060] |In contrast, shell variables are an internal shell concept. [131092410070] |They are not inherited when creating a new process. [131092410080] |You can export shell variables and make them environment variables. [131092410090] |When you write FOO='bar' in a shell script, that's a shell variable. [131092410100] |You can try creating 2 scripts: [131092410110] |When you execute the first script, it sets an internal shell variable then calls fork(). [131092410120] |The parent shell process will wait() for the child to finish, then execution continues (if there are more commands). In the child process, exec() is called to load a new shell. [131092410130] |This new process does not know about FOO. [131092410140] |If you modify the first script: [131092410150] |the FOO variable becomes part of the environment and is inherited by the forked process. [131092410160] |It's important to note that the environment is not global. [131092410170] |Child processes can't affect their parent's environment variables. [131092410180] |Modifications in test4.sh are not visible in test3.sh. [131092410190] |Information simply does not go that way. [131092410200] |When the child process ends, its environment is discarded. [131092410210] |Let's change test3.sh: [131092410220] |source is a built-in shell command. [131092410230] |It tells the shell to open a file then read and execute its content. [131092410240] |There is only a single shell process. [131092410250] |This way the caller can see the modifications to the environment variables and even shell variables.
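The distinction described above can be demonstrated in a few lines (a sketch, not the original test scripts from the post; the bracketed output markers are illustrative):

```shell
#!/bin/sh
# A plain shell variable is NOT inherited by child processes.
FOO='bar'
sh -c 'echo "child sees: [$FOO]"'   # prints: child sees: []

# After export, FOO is part of the environment and IS inherited.
export FOO
sh -c 'echo "child sees: [$FOO]"'   # prints: child sees: [bar]
```

The single quotes around the child command matter: they stop the parent shell from expanding $FOO itself, so the child's own environment decides what is printed.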
[131092410260] |As you probably know PATH is a special environment variable which tells the shell where to look for other executables. [131092410270] |When a new login shell is started it automatically sources .bash_profile. [131092410280] |The variables declared in there will be visible. [131092410290] |However if in .bash_profile you call other scripts with sh the PATH you set in those scripts will be lost. [131092420010] |Perhaps, using "set" is improper? [131092420020] |Yes, there's your problem. set doesn't do what you might expect. [131092420030] |From the documentation: [131092420040] |This builtin is so complicated that it deserves its own section. set allows you to change the values of shell options and set the positional parameters, or to display the names and values of shell variables. [131092420050] |Note the conspicuous lack of "actually set shell variables" in that list of things it does. [131092420060] |Buried in all the docs, you'll find that what it's doing is setting the shell's positional parameters to the arguments you've given. [131092420070] |You're just giving one argument, all of MONGODB="/usr/local/mongodb/bin". [131092420080] |So $1 gets set to that (and $# gets set to 1, since there's just the one argument). [131092420090] |Score one for anti-mnemonic Unix command names, huh? [131092420100] |So anyway, try just: [131092420110] |and it'll work. [131092430010] |This is not a variable assignment. [131092430020] |(It is one in C shell (csh, tcsh), but not in Bourne-style shells (sh, ash, bash, ksh, zsh, …).) This is a call to the set built-in, which sets the positional parameters, i.e. $1, $2, etc. [131092430030] |Try running this command in a terminal, then echo $1. [131092430040] |To assign a value to a shell variable, just write [131092430050] |This creates a shell variable (also called a (named) parameter), which you can access with $MONGODB. 
[131092430060] |The variable remains internal to the shell unless you've exported it with export MONGODB. [131092430070] |If exported, the variable is also visible to all processes started by that shell, through the environment. [131092430080] |You can condense the assignment and the export into a single line: [131092430090] |For what you're doing, there doesn't seem to be a need for MONGODB outside the script, and PATH is already exported (once a variable is exported, if you assign a new value, it is reflected in the environment). [131092430100] |So you can write:
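A sketch of what that final line can look like (an assumption based on the preceding explanation, using the path from the question):

```shell
# PATH is already exported, so a plain assignment is enough;
# the new value is seen by every child process started afterwards.
PATH="$PATH:/usr/local/mongodb/bin"
```

After sourcing .bash_profile, echo "$PATH" should end with /usr/local/mongodb/bin.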