[131038740010] |Where are some good guides for making packages (deb, rpm, etc)?
[131038740020] |I am looking for a succinct howto on the basics.
[131038750010] |For RPM you can start with 'Maximum RPM' (download here), which is old, but very elaborate, a good place to start learning.
[131038750020] |There is also a course on IBM DeveloperWorks, which is more of a tutorial.
[131038750030] |Once you grasp the basics, you should try to read the packaging guidelines for Fedora or OpenSUSE (which are much alike), so you can see how packaging is actually done in real life.
[131038750040] |I know that Ubuntu had a packagers class on IRC a while back, but I don't know about its current status.
[131038750050] |Debian (and Ubuntu) packaging tutorials are abundant out there.
[131038750060] |For Debian, too, read their packaging guidelines to see how it is actually done.
[131038760010] |It is often best to learn how to package the specific type of thing you're packaging.
[131038760020] |A Mono app is very different to a Python app, and you're best off learning the information relevant to your case first and foremost.
[131038770010] |The Ubuntu packaging guide is a good introduction.
[131038770020] |The rest you can learn by studying existing packages, and reading manuals (CDBS, and of course Debian Policy).
[131038770030] |However, as directhex said, it depends a lot on the kind of package you work on.
[131038770040] |For RPM, I liked the Mandriva wiki, and some Fedora RPM Guide and Guidelines.
[131038780010] |On FreeBSD, for an installed port:
[131038780020] |or
[131038780030] |The first one makes a package from the port while the second also includes all dependencies.
[131038780040] |Alternatively, you can gain more control by using pkg_create.
[131038780050] |Like make package it also requires the port to be installed:
[131038780060] |Unfortunately there is no clean and easy way to make a package without first installing it unless you delve into the nitty-gritty of ports maintenance and package creation, which you can read about here.
[131038780070] |This will be necessary if you want to package something you've written yourself.
[131038780080] |There are, however, a few alternatives to make life easier if you need to make software packages that aren't installed on your system.
[131038780090] |The first is to use a build jail.
[131038780100] |Alternatively (or concurrently), you can also just remove the software you install:
[131038780110] |from the port directory, or
[131038780120] |which provides more control (the -r switch removes dependencies as well).
[131038780130] |See the man pages for ports, pkg_delete and pkg_create for details.
[131038790010] |How to write a script to execute files in multiple directories
[131038790020] |How do I write a script to execute the files in multiple directories?
[131038790030] |The problem is this: I have many directories, and each has a data file to be read and analyzed by a python script (say, a.py).
[131038790040] |I don't want to "cd" to each of the directories and type "a.py".
[131038790050] |Outputs are saved in each directory.
[131038800010] |You can probably just use a for loop:
[131038800020] |It will run pushd $i; a.py; popd with $i set to first_dir, then again with $i as second_dir, and finally $i as third_dir. pushd switches to the given directory, and popd switches back to where you were.
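For reference, a minimal sketch of the loop described above (bash, run from the parent directory; the directory names are the example ones, and it assumes a.py is executable and on the PATH):

    for i in first_dir second_dir third_dir; do
        pushd "$i"    # switch to the directory
        a.py          # run the analysis script there
        popd          # switch back to where we were
    done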
[131038810010] |find will work magic for you.
[131038810020] |The find command searches recursively in all subdirectories for files that match a set of rules and performs an action on them.
[131038810030] |The -name rule will let you find files with a name that matches what you give it.
[131038810040] |You can use globbing; for example, "*.dat" would find all the .dat files.
[131038810050] |If necessary, you can use -regex instead of -name to match with a regex pattern instead of a glob pattern, so you could do ".*\.dat$" to match all the .dat files.
[131038810060] |The -execdir will execute whatever command you give it from the directory of the found file, replacing "{}" with the found file.
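Putting those pieces together, a sketch of such a find invocation (the .dat suffix and the assumption that a.py is on the PATH are illustrative, not from the question):

    find . -name "*.dat" -execdir a.py {} \;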
[131038820010] |How do I extract a specific directory from a tarball? and strip a leading directory?
[131038820020] |I want to extract a specific directory from the wordpress tarball, specifically wp-includes, to ..
[131038820030] |It appears that the directory structure inside the tarball is wordpress/wp-includes but I just need ./wp-includes once it's been extracted, no leading wordpress directory.
[131038820040] |How would I do this?
[131038830010] |Assuming you have GNU tar, you can use --strip-components:
[131038830020] |I believe current versions of BSD tar also support --strip-components.
[131038830030] |In the worst case, you could do:
[131038840010] |To extract a specific directory (and its contents, recursively), just pass it as an extra argument on the command line.
[131038840020] |With GNU tar, you can strip a leading directory with the --strip-components option (and more generally transform the file names with --transform).
[131038840030] |On non-Linux systems, you can use pax (it's in POSIX, but some Linux distributions omit it from their default installation) and its -s option.
[131038840040] |You can merge the inclusion list with the rewriting rules by appending a rule to rewrite everything to the empty name (which means “don't extract”; the rule only applies if the previous rules didn't match).
[131038850010] |Why doesn't the Linux kernel use linux-libre code to get rid of binary blobs?
[131038850020] |I ask this question because I'm curious as to whether there is some sort of performance advantage offered by the binary blobs that are in the Linux kernel.
[131038850030] |Since many of these blobs have been replaced with code in linux-libre, why has that same code not been incorporated into the Linux kernel at kernel.org?
[131038860010] |The Linux-libre project is an extension of efforts by distributions aimed at people who wish to use completely free operating systems, as defined by the Free Software Foundation.
[131038860020] |Currently it is maintained by FSFLA, the Latin American Free Software Foundation.
[131038860030] |According to the about page for the project:
[131038860040] |Linux-libre is a project to maintain and publish 100% Free distributions of Linux, suitable for use in Free System Distributions, removing software that is included without source code, with obfuscated or obscured source code, under non-Free Software licenses, that do not permit you to change the software so that it does what you wish, and that induces or requires you to install additional pieces of non-Free Software.
[131038860050] |A quick reading of the latest version of the "deblobbing" script shows that it mostly removes the binary blobs and some documentation.
[131038860060] |In many cases the binary blobs are either hardware drivers or firmware for hardware.
[131038860070] |Firmware is code that needs to be loaded onto the device itself and is often needed even when a free software driver exists.
[131038860080] |As far as I understand, there is no clear performance benefit from these blobs (although, without them, many people would have no performance) and most kernel developers would love to replace them with well-written, Free code.
[131038860090] |In your question you claim that "many of these blobs have been replaced with code in linux-libre" and ask why this code hasn't been accepted.
[131038860100] |In my reading of the scripts I could see very little code that was replaced.
[131038860110] |Rather, the majority of the script is removing code.
[131038860120] |The code that is added is intended to "replace the requests for non-Free firmware with messages that inform users that the hardware in question is a trap."
[131038860130] |(Linux Libre Release Announcement)
[131038860140] |If you have specific code in mind, please mention it in your question.
[131038860150] |Most patches for Linux are discussed either on the Linux Kernel Mailing List or one of the many subsystem-specific lists.
[131038860160] |Often the reasons for non-inclusion can be found by searching through these lists.
[131038870010] |How would you migrate from a Windows AD to a Linux LDAP server?
[131038870020] |I currently run a Windows server with Active Directory.
[131038870030] |But since we're no longer using Exchange 2007, it became a fancy file server with authentication.
[131038870040] |I would like to move the AD to a Linux server.
[131038870050] |What would be the best way to do this?
[131038870060] |And which LDAP server should I use?
[131038870070] |Update: there won't be any Windows clients left.
[131038870080] |They'll be updated to Edubuntu.
[131038880010] |Samba v.3 is able to be an NT4-style domain controller.
[131038880020] |If you had an AD server running for Exchange, that is not good enough.
[131038880030] |Samba v.4 will be able to be a Windows 2003 style domain controller, but is not done yet.
[131038880040] |Not by far.
[131038880050] |The next question would be: do you have any Windows clients left?
[131038880060] |If so, you have a problem.
[131038880070] |Windows is not as pluggable as Linux.
[131038880080] |While it is possible to change a certain DLL file (I forgot the name) to authenticate against a generic KDC, Windows was built to work with AD and with AD alone.
[131038880090] |Anything else requires altering Windows system DLLs.
[131038880100] |That sucks.
[131038880110] |If you do not have any Windows clients left, it becomes a lot easier.
[131038880120] |You can easily replace Windows AD with a combined Kerberos / LDAP solution.
[131038880130] |Kerberos KDC (Key Distribution Center) packages are in all distros. LDAP servers are available in a lot of different forms.
[131038880140] |The OpenLDAP server is in most distros. A GUI-based management tool for your LDAP directory is available from a lot of open source LDAP servers, like 389 and I think Apache DS too.
[131038880150] |I mentioned the FreeIPA project in this context in another thread as an integrated solution, but it is only for Linux.
[131038880160] |So, to make a long story short: do you have Windows clients on your network still?
[131038880170] |Edit: Apparently not.
[131038880180] |So, build yourself a KDC, grab a copy of 389 DS and you're good to go.
[131038880190] |Then, you'll have to do some LDAP scripting to pull user information from the domain controller and insert it into your LDAP server.
[131038880200] |I don't think you can migrate the users' passwords though; you will probably have to reset those.
[131038890010] |Since you will migrate from a Windows-based infrastructure to a Linux-based one,
[131038890020] |I think that in addition to the setup of the new LDAP servers, you will need to migrate the user account information.
[131038890030] |If this is your case, maybe you could use the LDIFDE tool from the Windows AD server to export the required information.
[131038890040] |After that, you would import that information into the new directory.
[131038900010] |TightVNC not running gnome-session
[131038900020] |I have a Debian Lenny box with the Gnome desktop.
[131038900030] |I've installed the TightVNC server on it and would like to see a Gnome session when connecting from another computer using a VNC viewer.
[131038900040] |But for some reason it opens the "X Desktop" with only a terminal window visible.
[131038900050] |What could be wrong?
[131038900060] |I used these instructions for editing the configuration (~/.vnc/xstartup).
[131038900070] |So it looks like it's not recognizing gnome-session & and is falling back to the generic session instead.
[131038900080] |Why?
[131038910010] |Since it wasn't a production box I just removed and then re-installed the Gnome desktop environment.
[131038910020] |After that RealVNC works without problems with Gnome. @Gilles - I came back here after the reinstall was already done, so the problem was solved.
[131038910030] |But thanks for the advice.
[131038920010] |Can I alter my Fedora LVM LV to install a new distro as a dual-boot?
[131038920020] |My question is almost a duplicate of this question, but not quite, because that one is about ext3 and I am already using LVM.
[131038920030] |I have an older HP Pavilion laptop running Fedora 11. I chose Fedora because it was semi-compatible with the hardware and it ran VMware well... but since I no longer need VMware I am looking to test out other distros and find one that's more compatible.
[131038920040] |(Specifically looking for software suspend support and maybe something more lightweight)
[131038920050] |I'd like to try out a few new distros without hosing the existing (working) Fedora setup.
[131038920060] |Since I am using LVM, is it possible to reduce the size of my LVM LV and then install new distros into the volgroup, without the new distros destroying the Fedora setup?
[131038920070] |Here's how my LVM is set up now:
[131038920080] |Are there distros which will allow me to install into a new logical volume without destroying the existing one?
[131038920090] |If so, which ones, and how would I go about making room for the new LV?
[131038930010] |I don't know if that functionality is offered by typical installers, but it is easy enough to do from a live CD (or live USB or whatever).
[131038930020] |Both SystemRescueCD and GParted Live have the required tools readily available (there are undoubtedly many other suitable live distributions).
[131038930030] |Note that you need to boot from a separate system, as ext3 filesystems cannot be shrunk while mounted.
[131038930040] |You can use the GParted GUI to shrink the filesystem by up to 20GB or so, and resize the existing logical volume accordingly.
[131038930050] |Then, when you install another distribution, you will be able to create a logical volume in the free space.
[131038930060] |Note that not all distributions support installing to a logical volume (all the “serious” ones do, of course); for Ubuntu, you need the server installer (as opposed to the desktop installer with snazzy graphics but fewer options).
[131038930070] |If you can't or don't want to use a GUI, here's an overview of how to do this on the command line (a sketch of the full sequence follows the list):
[131038930080] |  • pvscan to detect physical volumes (if not already done during boot).
[131038930090] |  • vgimport vg_token to import the volume group (ditto).
[131038930100] |  • vgchange -ay vg_token to make the logical volumes accessible.
[131038930110] |  • resize2fs /dev/vg_token/lv_root 72G (or whatever size you decide on).
[131038930120] |  • lvreduce -L 72g /dev/vg_token/lv_root (this must be the same size as the filesystem; remember that with LVM tools, lowercase units are binary (k=1024) and uppercase units are decimal (K=1000)).
[131038930130] |  • vgchange -an vg_token; vgexport vg_token; reboot.
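The sketch mentioned above, using the volume group and logical volume names from the list (the e2fsck step is my addition; resize2fs generally insists on a freshly checked filesystem):

    pvscan                                  # detect physical volumes
    vgimport vg_token                       # import the volume group
    vgchange -ay vg_token                   # make the logical volumes accessible
    e2fsck -f /dev/vg_token/lv_root         # assumed prerequisite for resize2fs
    resize2fs /dev/vg_token/lv_root 72G     # shrink the filesystem
    lvreduce -L 72g /dev/vg_token/lv_root   # shrink the LV to match
    vgchange -an vg_token; vgexport vg_token
    reboot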
[131038940010] |Calculate md5sum of a CD/DVD
[131038940020] |I have an ISO file, which I burned to a CD.
[131038940030] |Now how can I check if the CD is correctly created?
[131038940040] |I would like a command that calculates the hash sum that I can use to check against the hash sum I calculate on the ISO file.
[131038940050] |Ideally the command should:
[131038940060] |  • Work regardless of the ISO file: that is, I don't want to keep a list of hash sums for each file in the disc, or remember the number of blocks, or whatever
[131038940070] |  • Be relatively short: a one-line command is great, a chain of commands which is two lines long is OK, a script that spans one page is not
[131038940080] |  • Be fairly efficient: for example, dd'ing the disc back to a file and then running md5sum on the file is unacceptable
[131038940090] |If no answer can satisfy all of these, I will appreciate the nearest match too.
[131038940100] |Even better if you can tell me why it is not so straightforward.
[131038950010] |The basic problem is that we want to take the md5sum of the exact same information that was on the ISO originally.
[131038950020] |When you write the ISO to a CD, there is likely blank space on the end of the disk, which inevitably changes the md5sum.
[131038950030] |Thus, the very shortest way:
[131038950040] |doesn't work.
[131038950050] |What does work (and is common in online documentation) is only reading the exact number of bytes from the device and then doing the md5sum.
[131038950060] |If you know the number of bytes you can do something like:
[131038950070] |where 'xxxxx' is the size of the ISO in bytes.
[131038950080] |If you don't know the number of bytes off hand, but still have the ISO on your disk, you can get them using ls by doing something like the following (taken from here):
[131038950090] |There are many other one-line constructions that should work.
[131038950100] |Notice that in each case we are using dd to read the bytes from the disk, but we aren't piping these to a file; rather, we are handing them to md5sum straight away.
[131038950110] |Possible speed improvements can be made by doing some calculations to use a bigger block size (the bs= in the dd command).
[131038960010] |AWK: Keep lines of at most 72 chars' length
[131038960020] |i.e. I want it to add \n after 72 chars and continue, so initially you may need to remove all single \ns and then add them.
[131038960030] |It may be easier with another tool, but let's give awk a try.
[131038960040] |[Update]
[131038960050] |Williamson provided the right answer, but some help is needed to read it.
[131038960060] |I break the problem into parts with simpler examples, below.
[131038960070] |  • Why does the code below print \t in both cases? gsub should substitute things. (x is a dummy file; note the odd 0 at the end.)
[131038960080] |  • Attacking the line line = $0 \n more = getline \n gsub("\t"," ") in Williamson's reply: line apparently gets the whole input line while more gets the popped value of $0, right?
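The original snippet for part 1 is not shown here, but a likely explanation is that gsub() returns the number of substitutions rather than the modified text, so printing its return value does not print the changed line. A minimal illustration (my own example, not the question's code):

    # gsub() edits $0 in place and returns a count;
    # print $0 afterwards to see the substituted text.
    printf 'a\tb\tc\n' | awk '{ n = gsub("\t", " "); print n; print $0 }'
    # prints: 2
    #         a b c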
[131038960090] |Code to part 1
[131038970010] |Not using awk
[131038970020] |I understand this may just be one part of a larger problem you are trying to solve using awk, or simply an attempt to understand awk better, but if you really just want to keep your line length to 72 columns, there is a much better tool.
[131038970030] |The fmt tool was designed with specifically this in mind:
[131038970040] |fmt will also try hard to break the lines in reasonable places, making the output nicer to read.
[131038970050] |See the info page for more details about what fmt considers "reasonable places."
[131038980010] |Awk is a Turing-complete language, and not a particularly obfuscated one, so it's easy enough to truncate lines.
[131038980020] |Here's a straightforward imperative version.
[131038980030] |If you want to truncate lines between words, you can code it up in awk, but recognizing words is non-trivial (for reasons having more to do with natural languages than algorithmic difficulty).
[131038980040] |Many systems have a utility called fmt that does just that.
[131038990010] |Here is an AWK script that wraps long lines and re-wraps the remainders as well as short lines:
[131038990020] |There is a Perl script available on CPAN which does a very nice job of reformatting text.
[131038990030] |It's called paradj (individual files).
[131038990040] |In order to do hyphenation, you will also need TeX::Hyphen.
[131038990050] |Here is a diff of some changes I made to support a left-margin option:
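The scripts and the diff referred to above are not reproduced here. As a rough sketch of the general wrapping approach (my reconstruction, not Williamson's script): collect words and flush a line whenever adding the next word would pass 72 characters:

    awk '
    {
        for (i = 1; i <= NF; i++) {
            # flush the pending line if the next word would not fit
            if (length(line) && length(line) + 1 + length($i) > 72) {
                print line
                line = ""
            }
            line = (line == "") ? $i : line " " $i
        }
    }
    END { if (line != "") print line }
    ' input.txt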
[131039050020] |Click the cashew in the right corner of the panel, click Add Widgets, and search for Task Manager. Hover over the new task manager added to the panel; you should see a +-like set of arrows. Use that to adjust the position of the task manager widget to where you want it, left or right.
[131039050030] |Once it's there, right-click on the cashew and click Lock Widgets.
[131039050040] |As far as resetting it, I do believe there is a configuration file...
[131039050050] |though it seems to contain the settings for every widget, panel, etc., so I would definitely back it up before experimenting.
[131039050060] |If this doesn't work for you, perhaps you could upload a screenshot to SE and show us what you're dealing with.
[131039060010] |I dunno if you're on about the same thing as I had; I closed the whole panel (removed it)..
[131039060020] |discovered that if I right-clicked the top panel and added another, it automatically placed an empty one on the bottom of the screen.. to which I could add more stuff again from there... brilliant.
[131039060030] |Though I'm not sure if I'm on KDE, might be Gnome.... but it's a cool answer for those on the Gnome desktop.
[131039070010] |Is there a way to redirect a file that is written
[131039070020] |I have an application which I can not change, that writes to a big 250 GB plain text file.
[131039070030] |After compressing with GZ it is only 30 GB.
[131039070040] |This application has no option of compressing its output, and it can only write to a file name (not to stdout).
[131039070050] |Is there a way that I can set things up so the output is compressed immediately, without first storing the 250 GB file on my disk?
[131039070060] |I also need the other way around: to fool the application so that it reads a plaintext file which in reality is compressed.
[131039080010] |  • You can use mkfifo to set up a named pipe into which the program will write, while you gzip from that pipe into the final destination (see the sketch after this list).
[131039080020] |  • If you can pass the output filename to the program on the command line, then this (probably bash-specific) trick should also work: program >(gzip - >output.gz), as this will be translated by the shell into something like gzip - >output.gz & program /dev/fd/63.
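A sketch of the named-pipe variant (the program name, its flag and the pipe path are placeholders, not from the question):

    mkfifo /tmp/out.pipe                 # create the named pipe
    gzip < /tmp/out.pipe > output.gz &   # reader compresses whatever arrives
    theapp --output /tmp/out.pipe        # hypothetical program writing its file
    wait                                 # let gzip finish draining the pipe
    rm /tmp/out.pipe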
[131039090010] |You may have success using /dev/stdout as the filename and piping the output of your application to gzip.
[131039090020] |/dev/stdout is a symlink to /proc/self/fd/1.
[131039090030] |Similarly, you may be able to use /dev/stdin as a filename and pipe the output of gzip to the application.
[131039090040] |I say may, because the application may be expecting a seekable file that it writes to (reads from), but /dev/std{in,out} will not be seekable.
[131039090050] |If this is the case then you are probably lost.
[131039090060] |You will need to use a seekable file as the target for the application.
[131039100010] |Originally, I thought, sure, that's easy: just mount a loopback device with a compressed filesystem where the program expects to write to.
[131039100020] |Unfortunately, upon searching, I found that there aren't many read/write filesystems, and what's there (jffs2) can't be mounted via a loopback device.
[131039100030] |I did find FuseCompress, which may be what you're looking for, but if you need high reliability, I'd skip it.
[131039100040] |Another alternative would be to store the file on a USB hard drive, and make a symlink at the location the program writes to.
[131039100050] |This may be too much of a hassle if you frequently work with the program or if you don't already have a 250GB+ USB drive hanging around.
[131039110010] |If the application doesn't require its input and output to be seekable, pass it /dev/stdout or a process substitution like <(gunzip ...); see camh's answer and alex's answer.
[131039110020] |If the application does require a seekable file, your best bet is a filesystem that implements compression.
[131039110030] |There are a few unix filesystem implementations that support compression:
[131039110040] |  • Through FUSE, which is available on most unices, there are a few compression filesystems.
[131039110050] |    FuseCompress and CompFUSEd are two options, as well as the various archive filesystems.
[131039110060] |  • ZFS supports everything including the kitchen sink and compression.
[131039110070] |    It's the native filesystem under Solaris these days (that's where it came from).
[131039110080] |    It's available through FUSE at least on Linux.
[131039110090] |    FreeBSD and NetBSD have at least partial native implementations of ZFS.
[131039110100] |  • On Linux, there are patches floating around to implement compression on ext2 and derivatives.
[131039110110] |    I don't know how reliable they are or how compatible they are with ext3 and ext4.
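As an illustration of the ZFS route (assuming a pool named tank already exists; the dataset name is made up):

    zfs create tank/appdata               # new dataset for the program's output
    zfs set compression=gzip tank/appdata # compress transparently on write
    zfs get compressratio tank/appdata    # check how well the data compresses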
[131039120010] |How can I make a program executable from everywhere
[131039120020] |What should I do if I want to be able to run a given program regardless of my current directory?
[131039120030] |Should I create a symbolic link to the program in the /bin folder?
[131039130010] |If you want to run a command foo in the directory your shell is currently in, you basically have two options:
[131039130020] |  • Type ./foo at the shell prompt.
[131039130030] |  • Add the . directory (. is a name for "the current directory") to the PATH environment variable; how you do this depends on the shell you are using (see the sketch after this list):
[131039130040] |    • for Bourne-type shells (bash, zsh, ksh, etc.) you write (see this page for more information):
[131039130050] |    • for csh-type shells (tcsh, csh) you write (see this page for more information):
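The commands behind those two bullets would presumably be the following; the Bourne-shell line is quoted verbatim further down in this thread, the csh line is my reconstruction:

    export PATH=$PATH:.      # Bourne-type shells (bash, zsh, ksh, ...)
    setenv PATH ${PATH}:.    # csh-type shells (assumed equivalent)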
[131039130060] |Note that 2. is a security risk on multi-user systems: imagine you cd to directory /tmp and a malicious user has created a malware binary named ls in there.
[131039140010] |If you just export PATH=$PATH:. at the command line it will only last for the length of the session though.
[131039140020] |If you want to change it permanently, add export PATH=$PATH:. to your ~/.bashrc file (just at the end is fine).
[131039150010] |Placing a link to the file in the /bin directory isn't the best thing to do, for multiple reasons.
[131039150020] |  • If the actual executable file is in a location that some users can't see or execute, they see it as a bad link or dysfunctional program.
[131039150030] |  • The /bin directory is supposed to be reserved for programs which are required for running the system (things like chmod, mkdir, etc).
[131039150040] |You can actually place (install) the executable file in /usr/bin/ or even /usr/local/bin/.
[131039150050] |Of course, you've manually installed the program at that point; your distribution isn't going to keep track of it the way it does the rest of your programs - you'll have to manually upgrade it when necessary and manually remove it if you want it gone.
[131039150060] |Also, you'll have to know what packages it depends on (it sounds like you already use the program, so that's taken care of, but in general...).
[131039150070] |Unless I'm setting up a program that I expect other users to use, that's not what I usually do: I create a bin directory just for me in my home directory, and I edit my shell profile to add ~/bin/ to my PATH environment variable.
[131039150080] |I find it easier to keep track of the programs I've installed that way, because it is separated from the rest of the system.
[131039160010] |The short answer is that to run the program, no matter what your directory, you need to have the program's directory in your search path.
[131039160020] |The problem can be solved by putting the program into a folder that's already in that path, or by adding a new folder to the path - either will work.
[131039160030] |The best answer depends on:
[131039160040] |Is this program a downloaded program that you have compiled yourself from source?
[131039160050] |It quite likely will have an install mechanism already.
[131039160060] |In the folder where you compiled the program, as root, run 'make install'.
[131039160070] |Is this program a downloaded program that you want to make available as part of the standard programs on the computer?
[131039160080] |It makes sense to put this kind of application into a standard folder; it's quite common to use directories such as /usr/local/bin for such programs.
[131039160090] |You will need root access to do this.
[131039160100] |Is this a program that you have written for yourself, and/or do you have no special privileges on the computer?
[131039160110] |Create a folder in your home directory called 'bin', and place the program in there.
[131039160120] |You may need to edit your login script to add the full path to this folder (e.g. /usr/home/jeremy/bin).
[131039160130] |Whilst you could just add its current directory to the search path, you would have to keep doing this with every new program - and it is more work in the longer term.
[131039170010] |Blacklisting websites for certain users
[131039170020] |This question has two parts.
[131039170030] |First, suppose I had a list of websites that I would like to block.
[131039170040] |How do I tell my computer to block those and any relevant subdomains?
[131039170050] |Secondly, how do I get this right on a per-user basis?
[131039170060] |For example, telling the computer to block userA from accessing facebook should not block userB from facebook.
[131039170070] |Bonus points if the answer is a command-line one.
[131039180010] |There are three parts in your question, in fact:
[131039180020] |  • Decide on a blocking strategy at the network level: what connections are allowed?
[131039180030] |  • Implement that blocking strategy.
[131039180040] |  • … in a way that only affects certain users.
[131039180050] |Blocking websites is not easy.
[131039180060] |In fact, I would say that it's impossible to completely block a website without completely blocking network access.
[131039180070] |All you can do is make the blocked user's life more difficult, but if they really want to they will be able to access the blocked site, with increased latency and decreased bandwidth, provided they have enough technical sophistication and possibly can rely on an outside server.
[131039180080] |For ordinary browsing, users can look at cached copies on Google or otherwise.
[131039180090] |Users who have an outside server can use it as a proxy, or they can use existing proxies (open proxies come and go too fast to block usefully).
[131039180100] |You can try blocking by domain name or by IP address.
[131039180110] |IP addresses might work for a big site like Facebook, although you'd have to keep up with all their server moves.
[131039180120] |It won't work with smaller sites that are co-hosted.
[131039180130] |A lightweight way to block some web sites is to block their DNS name resolution.
[131039180140] |Just this is likely to make the users' life annoying enough that they work around your block by using an external proxy (which does require some sophistication).
[131039180150] |But there's no practical way of tuning DNS resolution per-user (it's not impossible in principle, but you'd need to set up a working identd and find a DNS server that talks to it).
[131039180160] |The natural way to block web sites is to block direct web access and allow only access through a web proxy.
[131039180170] |Squid is the de facto standard.
[131039180180] |You can set it up as a transparent proxy (all connections on ports 80 and 443 are routed to the proxy machine; the odd website on another port may or may not work depending on how you configure your firewall) or as an explicit proxy (users must configure their browser; only the machine with the proxy can connect to the outside).
[131039180190] |An easy way of implementing per-user settings is to require authentication in the proxy.
[131039180200] |Then having different levels of access is a job for the proxy.
[131039180210] |To avoid the password requirement, you can also make the proxy use ident (though this adds latency for all accesses).
[131039180220] |Your task will be easier if you can run the proxy on a different machine (it can be a virtual machine).
[131039180230] |Doing everything on the same machine is possible but complicated on Linux, and I suspect it's also possible-but-complicated on other unices.
[131039190010] |Wine cdrom mount locations on Linux Mint 10 (Ubuntu 10.10) -- problems post install/switching discs
[131039190020] |I've been trying to install a few Windows games with Wine on Linux Mint 10 (based on Ubuntu 10.10), but I am finding that my CD-ROM mount points aren't standard.
[131039190030] |i.e.: rather than /media/cdrom I get /media/disc_name [a la /media/Warcraft\ III]
[131039190040] |This seems to cause problems during the installation of multi-disc games, and games which require the disc to be in the drive after installation.
[131039190050] |In both cases the target disc cannot be found, even when the mount point is verified to match the original installation source, or updated in the installer to match the location of the second disc due to autonaming.
[131039190060] |Any ideas what I could do here?
[131039190070] |Nearly all cases result in a file not found error.
[131039200010] |If you don't mind mounting the disks manually, add the following line to /etc/fstab:
[131039200020] |Then you can use mount /media/cdrom when you insert a disk, and umount /media/cdrom before ejecting it.
[131039200030] |You'll need to either figure out how to disable any automatic mounting, or undo that automatic mounting (with umount).
[131039200040] |You can also move a mount point with mount --move '/media/Warcraft III' /media/cdrom (this needs to be run as root).
[131039210010] |Does the Fedora installer not include default URLs for installation mirrors?
[131039210020] |I installed Fedora 14 using a slightly non-standard method, i.e. loading the install media vmlinuz/initrd.img files via an existing grub2 instance.
[131039210030] |(I fetched them from a mirror).
[131039210040] |The installer works fine, but I was a little bit surprised that after selecting the network install route I had to manually enter the URL of a FC14 mirror.
[131039210050] |Luckily, I have a secondary computer with network access available for looking up mirror URLs.
[131039210060] |Does FC14 not include any default install mirror URLs?
[131039210070] |Or am I missing something?
[131039220010] |The simple answer is no, and this goes back to at least 2005. If you are doing this en masse with grub, then you should still be able to specify the paths to a mirror in the boot options, just like you can specify a path to a Kickstart file.
[131039220020] |Some brilliant examples of how to do this can be found on the Fedora Infrastructure wiki pages; mainly just -x "method=..." should do the trick.
[131039230010] |Are packages cryptographically signed in Fedora 14?
[131039230020] |I am installing Fedora 14 and I am wondering if
[131039230030] |  • the Fedora packages are cryptographically signed
[131039230040] |  • package signatures are checked by the installer by default
[131039230050] |  • package signatures are checked by yum when installing additional packages or doing upgrades
[131039240010] |According to the Fedora Documentation:
[131039240020] |All Fedora packages are signed with the Fedora GPG key.
[131039240030] |GPG stands for GNU Privacy Guard, or GnuPG, a free software package used for ensuring the authenticity of distributed files.
[131039240040] |For example, a private key (secret key) locks the package while the public key unlocks and verifies the package.
[131039240050] |If the public key distributed by Fedora does not match the private key during RPM verification, the package may have been altered and therefore cannot be trusted.
[131039240060] |The RPM utility within Fedora automatically tries to verify the GPG signature of an RPM package before installing it.
[131039240070] |If the Fedora GPG key is not installed, install it from a secure, static location, such as a Fedora installation CD-ROM or DVD.
[131039240080] |Further, according to the Yum Documentation:
[131039240090] |Yum provides secure package management by enabling GPG (Gnu Privacy Guard; also known as GnuPG) signature verification on GPG-signed packages to be turned on for all package repositories (i.e. package sources), or for individual repositories.
[131039240100] |When signature verification is enabled, Yum will refuse to install any packages not GPG-signed with the correct key for that repository.
[131039240110] |This means that you can trust that the RPM packages you download and install on your system are from a trusted source, such as The Fedora Project, and were not modified during transfer.
[131039240120] |On a freshly installed Fedora 14 system, the /etc/yum.conf includes
[131039240130] |indicating that this feature of yum is enabled by default.
[131039240140] |Thus it seems that the answers to (1) and (3) are "Yes". I believe the answer to (2) is more complicated.
[131039240150] |Both the DVD and the Live CD have the ability to verify the entire disk.
[131039240160] |If you are concerned with the integrity of your install media, you can use this built-in functionality (see the documentation).
[131039240170] |If you are more concerned with security, you may want to verify the ISO before burning, using the method provided here:
[131039240180] |https://fedoraproject.org/en/verify
[131039240190] |(See this question for tips on how to get the checksum of an already burnt CD.)
[131039240200] |UPDATED: If you install using the Live CD, I believe that the live image is being copied directly to your disk; thus the packages aren't being installed by the package manager and their signatures aren't being checked.
[131039240210] |If you install using a Network Install or the full DVD, the GPG signatures still aren't checked.
[131039240220] |See mattdm's answer.
[131039250010] |The packages are cryptographically signed, and the yum package installer does check those signatures when you add packages after the fact.
[131039250020] |The initial installer, however, does not check package signatures.
[131039250030] |This is a difficult problem, because: how do you verify that the cryptographic signatures you have on your install media are good when you don't, by definition, trust that install media?
[131039250040] |See this Fedora bugzilla entry for history and details.
[131039250050] |This is the oldest bug still open in Red Hat's database, and it's so old that it's only three digits.
[131039250060] |(New bugs are now numbered well into the six hundred thousands.)
[131039250070] |But the entire install DVD is checksummed, and you can verify that it's good externally, before starting your install, against checksum files which are cryptographically signed.
[131039250080] |So, if you're very concerned (and in this day and age, it's good to be), do a non-network install after verifying the ISO you download against the GPG key from the official Fedora Project web site.
[131039250090] |So to answer your three questions: yes, sort of, and yes.
[131039260010] |How to generate grub.conf from scratch in Fedora 14?
[131039260020] |It seems that I hit the following bug in Fedora 14 - I deselected installing grub into the MBR while using the graphical installer, because I want to use an existing grub instance.
[131039260030] |Now, I want to look up the grub.conf or menu.lst in the newly installed Fedora 14 system to adjust my existing grub config - but I cannot find them anywhere.
[131039260040] |I just found information about how grub-install and grubby are not able to create a grub.conf from scratch.
[131039260050] |Thus my question: How to generate a grub.conf from scratch in Fedora 14, if it is missing?
[131039260060] |Update: The point is to get the set of default kernel options a grub.conf includes when created by the Fedora installer.
[131039260070] |I have booted the Fedora installation via grub, only setting the 'ro root=' options, which worked.
[131039260080] |However, it would be great if someone could post a standard generated grub stanza from his/her Fedora 14 system to see the kernel parameter differences.
[131039270010] |I'm not sure if Fedora 14 uses the legacy version of GRUB (0.97) or the new version (1.xx); you can check by running grub --version on the command line.
[131039270020] |Either way the configuration file format is pretty simple; the legacy one is documented here, and the newer one here.
[131039270030] |The newer one even comes with grub-mkconfig to generate a configuration file for you.
[131039280010] |The following is the grub.conf present immediately after installation of Fedora 14 within a VirtualBox VM.
[131039280020] |During installation I used all of the default options.
[131039290010] |How to disable multiple logins in GNOME?
[131039290020] |Basically, I want to disable multiple logins.
[131039290030] |The family laptop (not mine) has a problem with the X server -- at least it seems to be the X server.
[131039290040] |When two users are logged in simultaneously, the X server fails to start for the one who was logged in first.
[131039290050] |Having two users logged in seems to steal processor speed anyway.
[131039290060] |So I want to stop people from choosing 'switch user' over 'log out (username)' or 'shut down'.
[131039300010] |There is a setting /desktop/gnome/lockdown/disable_user_switching in GConf that allows you to disable user switching.
[131039300020] |You can change this setting by running gconf-editor from the Alt+F2 "Run" dialog (depending on your distro, it might also be available somewhere in the menus).
[131039310010] |Building an appliance
[131039310020] |I want to build an appliance for the kitchen.
[131039310030] |It is supposed to play music (files, streams, CDA), surf the web (recipe sites!), maybe play video (files and streams), and display today's agenda during breakfast.
[131039310040] |Because I do not want it to run 24/7, it should boot up really fast.
[131039310050] |I am thinking about a small SSD for the OS.
[131039310060] |I also want to use a touch screen.
[131039310070] |I will probably use NetBSD or Linux (not sure yet).
[131039310080] |It really should get the feel of an appliance, like a car GPS system or a smart phone.
[131039310090] |Now here is the question: Is there some kind of interface that can be operated by touch screen (on-screen keyboard?) and that can be tweaked to what I want to do with it?
[131039310100] |Bonus question: Did you build such a system yourself?
[131039310110] |How?
[131039320010] |Have you looked at MeeGo, which is designed for touchscreen environments?
[131039330010] |Good graphics card for TV-Out
[131039330020] |I am looking for a graphics card (AGP) with a TV-Out (S-Video or Composite) that is not a PITA with X. I guess it depends on the driver/X server.
[131039330030] |The card needs to be small and should not have a huge cooler mounted, because it is supposed to be plugged into a Shuttle case.
[131039330040] |The onboard VIA UniChrome chipset's TV-Out is supposed to play well with Linux with the openChrome driver (www.openchrome.org seems to be down), but I could not get it to work.
[131039330050] |So these are the conditions:
[131039330060] |  • Linux (current Ubuntu?)
[131039330070] |  • AGP
[131039330080] |  • short card
[131039330090] |  • small cooler (possibly passive or really quiet)
[131039330100] |  • TV-Out playing well with Linux and X
[131039330110] |  • card should still be available through second hand sources (eBay?)
[131039330120] |Do you have any recommendations and/or experiences to share?
[131039340010] |Back in the days of Pentium IIs and IIIs I had several AGP Nvidia cards, from Riva TNT, GeForce 2, GeForce 3, GeForce 4 MX ... all worked quite well on TV out.
[131039340020] |In those days I had to manually configure X and restart the computer for TV out to work, but now you can just use the "xrandr" command with a couple of parameters and everything works fine.
[131039350010] |In bash is if [ -z "$1" ] and if [ "$1" = "" ] the same?
[131039350020] |What is the difference between
[131039350030] |and
[131039350040] |or are they the same?
[131039350050] |I have found myself writing the second one in some scripts, but then I read here that the -z flag does the same.
[131039350060] |If there are some problems using the second one, should we ignore that the first one looks a little better?
[131039360010] |[ "$1" = "" ] and [ -z "$1" ] are exactly equivalent in bash and other POSIX-compliant shells.
[131039360020] |(Note that there must be a space on each side of the brackets, unless there is a non-word-constituent character like ;.)
[131039360030] |[ is a shell built-in like any other; in fact it can also be spelled test (the only difference between the two is that [ requires a ] as the last argument).
[131039360040] |So if you run [ "$1" = "" ] with $1 expanding to -z, the test operator sees three arguments: -z, = and the empty string.
[131039360050] |Older shells in the Bourne family sometimes threw parse errors when an operand looked like an operator in this way, even if there was no ambiguity in the complete expression, although I don't know if any version did in fact have trouble with this particular expression.
[131039360060] |There may also have been versions that had trouble with empty words; these would not have supported either [ "$1" ].
[131039360070] |A common shell idiom is [ x"$1" = x"" ].
[131039360080] |It avoids any risk of having operands parsed as operators, because no operator starts with a letter.
[131039360090] |In ksh, bash and zsh, you can use the double bracket syntax, [[ -z $1 ]].
[131039360100] |This newer syntax (it's from the late 1980s rather than the mid-1970s) eliminates the risk of having operands parsed as operators by using a special syntactic construct rather than an ordinary built-in.
[131039360110] |Operators must appear literally, unquoted within the double brackets, and you don't need to double-quote variable expansions.
[131039370010] |sudo missing on Palm WebOS - can I add it?
[131039370020] |I've rooted my Palm Pre (WebOS 1.4.5) and installed an SSH server on it.
[131039370030] |Now I'd like to SSH into it with an unprivileged user and use sudo for elevated commands.
[131039370040] |However: There is no sudo on the system.
[131039370050] |Can I add it somehow?
[131039380010] |I found the way to do it (it was rather easy after all):
[131039380020] |  • prerequisite: Preware is installed (done this already)
[131039380030] |  • install ipkg-opt (called the "Optware Advanced Linux Command Line Installer" in Preware)
[131039380040] |  • connect to your device command line as root (via novaterm/USB cable or SSH, if already installed)
[131039380050] |  • call ipkg-opt update
[131039380060] |  • call ipkg-opt list | grep sudo to make sure the package is available
[131039380070] |  • call ipkg-opt install sudo
[131039380080] |At which point my device did:
[131039380090] |I assume an unprivileged user has already been added to the system.
[131039380100] |Now modify /opt/etc/sudoers:
[131039380110] |  • chmod 640 /opt/etc/sudoers (make writable for root)
[131039380120] |  • add permission for that user to the file (username ALL=(ALL) ALL)
[131039380130] |  • chmod 440 /opt/etc/sudoers (make read-only again)
[131039380140] |Now SSHing into the device as the unprivileged user and using sudo from there should work.
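For illustration, the sudoers entry from the steps above with a made-up user name (if the Optware sudo package ships visudo, prefer it over editing the file directly, since it checks the syntax):

    # /opt/etc/sudoers -- "webos" is a placeholder user name
    webos ALL=(ALL) ALL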
[131039390010] |zsh autocomplete not updating path executables
[131039390020] |Possible Duplicate: rebuild auto-complete index (or whatever it's called)
[131039390030] |After I install something via aptitude, zsh won't be able to find it until I search for it with which.
[131039390040] |For example
[131039390050] |This naturally is pretty annoying.
[131039390060] |Is there a zsh setting for this?
[131039390070] |Or is it a bug?
[131039390080] |Or what?
[131039400010] |The caps lock and scroll lock lights are flashing and everything is frozen.
[131039400020] |I was using a small Linux distro that was running X11 with JWM as its window manager.
[131039400030] |I was browsing a directory when all of a sudden the mouse froze and the keyboard became unresponsive.
[131039400040] |The caps lock and scroll lock lights are flashing.
[131039400050] |What does this mean... and is there anything I can do about it?
[131039410010] |The kernel has crashed your PC; the reason could be anything...
[131039410020] |A good question is how you collect the crash data, so you know what crashed it.
[131039410030] |But the only thing to do is to reboot the PC.
[131039420010] |How can I scroll within the output of my watch command?
[131039420020] |I use the watch command to see the contents of my directory changing as a script runs on it (via watch ls dir/).
[131039420030] |It's a great tool, except that I can't seem to scroll down or up to see all of the contents once the number of entries fills the vertical length of the screen.
[131039420040] |Is there a way to do this?
[131039430010] |watch is great, but this is one of the things it can't do.
[131039430020] |You can use tail to show the latest entries:
[131039440010] |list full path of file without typing it from relative path
[131039440020] |I could do this
[131039440030] |but I'd like to do this
[131039440040] |is it possible?
[131039450010] |You will have to use find and pwd.
[131039450020] |Something like:
[131039450030] |OR
[131039450040] |From this answer:
[131039450050] |You could use the $PWD variable to cut out unwanted subshells:
[131039450060] |See Also:
[131039450070] |  • How can I list files with their absolute path in linux?
[131039450080] |  • ls -R --fullpath | grep filename
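The commands elided above were presumably along these lines (my reconstruction; .htaccess stands in for whatever file you are after, and -maxdepth is a GNU find option):

    find "$(pwd)" -maxdepth 1 -name .htaccess   # find + pwd, one subshell
    ls -d "$PWD"/.htaccess                      # $PWD variant, no subshell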
[131039460010] |Using find in combination with pwd is a fine answer, but it creates two subshells and isn't necessary.
[131039460020] |There is a command which will do what you want:
[131039460030] |readlink -f .htaccess
[131039460040] |Output:
[131039470010] |realpath exists for this purpose (finding canonical absolute paths):
[131039480010] |CH3MNAS Fun Plug and NZBget. Cannot launch NzbGet 0.7, word unexpected
[131039480020] |Hello,
[131039480030] |I have un-tarred the nzbget 0.7.0 debug version and have put it in the /ffp/bin dir, and I have a config file in the /ffp/etc/ dir.
[131039480040] |But when I try to run it, I get the following:
[131039480050] |I used this how-to: http://www.aroundmyroom.com/2009/01/27/the-how-to-that-replaces-all/
[131039480060] |I used this tar: nzbget-0.7.0-bin-dns323-arm-debug.tar.gz from http://sourceforge.net/projects/nzbget/files/
[131039480070] |What did I do wrong?
[131039480080] |PS:
[131039480090] |I logged in as root.
[131039490010] |nzbget is a binary file; you can't use sh to process it. You would do that if nzbget were a shell script.
[131039490020] |Running just nzbget didn't work because by default the current directory is not on the PATH, so you need to do something like:
[131039490030] |Or:
[131039500010] |Using text from previous commands' output
[131039500020] |I know this is not how terminals work, but I find myself often wishing there was an easy way of using text (copying it, modifying it, etc.) that is already in my terminal window history from some previous command's output.
[131039500030] |I've imagined it like this:
[131039500040] |I'm at my bash shell about to enter a command and I realize I need to type something that is already on the screen a few lines above.
[131039500050] |I can reach for the mouse and select it, but I hate that.
[131039500060] |What I really wish at this moment is that my whole terminal became a giant Vim buffer, all of it: the prompts, the commands I've typed, and the output from those commands.
[131039500070] |Everything.
[131039500080] |And then I could jump a few lines up, maybe edit a little, and yank some text.
[131039500090] |Then, return to bash shell mode and paste whatever I yanked into the current command line.
[131039500100] |I know there are some options:
[131039500110] |  • Naturally, pipe the text I need directly from some command's output to a file, or right to the next command.
[131039500120] |    But for that you have to plan in advance or re-execute the command from your history (supposing the output won't change between invocations).
[131039500130] |  • To reach for the mouse and select the text to copy.
[131039500140] |    Lame, annoying, I wish I didn't have to do that (this isn't the GUI stackexchange site, is it? =)
[131039500150] |  • To use my (Mac OS X) Terminal application's "Export text as..." option, and save the terminal text to a file.
[131039500160] |    Then open the file and work with it.
[131039500170] |    Almost, but why go through the step of saving to a file on disk?
  • [131039500180] |So the question is: What do you do in this situation? [131039500190] |Is there any trick, tool, shell feature that could be useful in this scenario? [131039500200] |Do you just resign yourself to use the mouse or re-type whatever you need to type? [131039500210] |It might be that the problem is that I'm not doing things 'the unix way', but c'mon, this is 2010 and I cannot easily copy some text a few lines away from my cursor? [131039510010] |You could use script (use -f to make sure it flushes every time) and you can have all the output in a file. [131039510020] |Use a terminal multiplexer (screen, tmux etc). [131039510030] |For instance, for screen http://web.mit.edu/gnu/doc/html/screen_11.html# [131039510040] |Command: copy Enter copy/scrollback mode. [131039510050] |This allows you to copy text from the current window and its history into the paste buffer. [131039510060] |In this mode a vi-like full screen editor is active, with controls as outlined below. [131039510070] |Other terminal multiplexers surely have more advanced capabilities. [131039520010] |In emacs, M-x shell handle edition of previous output. [131039520020] |You can easily copy paste previous commands output, or any part of your buffer, like a regular buffer. [131039520030] |I am pretty sure there should be a vim equivalent. [131039530010] |In tmux (a terminal multiplexer) you can press Ctrl+A-[ to switch to the Vi buffer mode where you can navigate around the screen, scroll the buffer back and forth, copy text etc. [131039530020] |The default shortcut is actually Ctrl+B-[ but that was obviously meant to be reconfigured. [131039530030] |Also, the default buffer mode is Emacs but you can configure it to be Vi. [131039530040] |Check out tmux, it really is a great modern terminal multiplexer. [131039530050] |Besides working with buffer you can split screen in multiple windows, connect to the same session from multiple terminals etc. [131039530060] |For ultimate convenience you can even make it your login shell if you tell it what your actual shell is. [131039530070] |On OpenBSD tmux was even made part of the base system. [131039530080] |See man page for tmux for more details. [131039530090] |Also see screenshots on http://tmux.sourceforge.net/ [131039540010] |As mentioned here, the Emacs' eshell could be your default term+shell. :) Then you'd use the usual text navigating keys there as a minimum. [131039540020] |If you learned more special keys, then the following features of Emacs' eshell can be accessed: [131039540030] |
  • navigating, i.e. jumping, between the past cmd prompts,
  • [131039540040] |marking and narrowing to the output of previous cmds,
  • [131039540050] |substituting the output of the last cmd for a special shell var in the cmd prompt.
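Tying the multiplexer answers above together, here is the minimal configuration sketch promised earlier: a few lines of ~/.tmux.conf that move the prefix to Ctrl+A and switch copy mode to vi keys. The option names are taken from tmux's man page, but treat the exact values as assumptions to verify against your tmux version:

    # ~/.tmux.conf — minimal sketch (check option names against your tmux version)
    set -g prefix C-a            # use Ctrl+A instead of the default Ctrl+B prefix
    unbind C-b
    setw -g mode-keys vi         # vi-style movement in copy/scrollback mode

With this in place, Prefix [ enters copy mode and the usual vi motions (h/j/k/l, / and ? for searching) move you around the scrollback before you yank text back into the command line.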
[131039550010] |How to run commands automatically on gnome-terminal after log-in? [131039550020] |After each login, there are certain commands that I run on specific tabs of gnome-terminal. [131039550030] |This is a tedious process, so can this be done automatically? [131039560010] |Yes, there is a way. [131039560020] |You need to tell gnome-terminal to launch tabs with certain profiles; these profiles must be set up to start a shell with the commands you want. [131039560030] |First, you need to make a script (or a launcher icon) that will start gnome-terminal --tab-with-profile=Dev. [131039560040] |"Dev" is the name of a profile you will create, so replace that with whatever you want it to be. [131039560050] |Also, you can specify as many --tab-with-profile options as you want: it will open a tab for each. [131039560060] |Now, you need the profile you just referenced. [131039560070] |This is created by opening gnome-terminal and finding Edit->Profiles... in the menu. [131039560080] |Make a new profile and give it the name you specified in the previous step. [131039560090] |Next, you need to set its preferences. [131039560100] |Highlight the newly created profile and click the Edit button. [131039560110] |When the Profile Preferences dialog is up, activate the "Title and Command" tab, check "Run a custom command..." and in the associated textbox, put sh -c "ENV=$HOME/.dev_profile sh". [131039560120] |Of course, you can set ENV to any path you want, as long as you are consistent in the next step. [131039560130] |This starts sh, and sh will execute whatever commands are in $HOME/.dev_profile [131039560140] |Next, you need to create that shell profile file. [131039560150] |So edit $HOME/.dev_profile (or whatever file you specified in the previous step). [131039560160] |Place whatever commands you want in there; they will be executed when the shell is started. [131039560170] |Treat this like you would a .bashrc - this will replace it. [131039560180] |Depending on how your .bashrc is set up, you may want to source $HOME/.bashrc in the profile to copy all the functionality over from your normal sh profile. [131039570010] |You can start multiple commands on the same gnome-terminal command line by specifying the --tab-with-profile option multiple times, followed each time by a single -e specifying what command to run in that tab. [131039570020] |You can also use --window-with-profile to have multiple windows. [131039570030] |For example, the following command starts two windows with two tabs each; the first window runs bash in each tab, setting the environment variable TAB to 1 or 2; the second window runs htop in one tab and iotop in the other tab. [131039570040] |The explicit sh invocation, with correct quoting, is necessary for some reason. [131039570050] |If you want a command to run when you log in, put it in a shell script (for example ~/bin/my_gnome_login_commands) and register it in “System / Preferences / Startup Applications” in the Gnome menu. [131039570060] |Alternatively, create a file ~/.config/autostart/my_commands.desktop containing [131039570070] |(You must use the full path to your home directory on the Exec= line, you can't use ~.) [131039570080] |(This answer has been tested with Gnome 2.30 on Ubuntu 10.04. [131039570090] |As Gnome sometimes breaks compatibility, it may or may not apply to other versions.)
[131039580030] |Here is the error: [131039580040] |Here is the content of ~/.ssh/config [131039580050] |I would bet the ProxyCommand is not found, but /usr/bin/corkscrew is working, and it's also in my PATH. corkscrew without the path is also not working. [131039580060] |[echox@archbox:~] % corkscrew http-proxy.some.proxy.tld 8080 some.ssh.host.tld 22 works fine invoked directly. [131039580070] |Any idea? [131039590010] |I have no idea about corkscrew, but I have had success with ssh through an http proxy using PuTTY. [131039590020] |Maybe it can do the work instead? [131039590030] |Edit: PuTTY is an X-based program, so it can't help in all cases. [131039590040] |But if you would like to install PuTTY on something like Ubuntu, you can just install it with apt. [131039600010] |OK, I totally forgot about -vvv :-) Here is the output: [131039600020] |The key is the line with ssh_connect: needpriv 0. [131039600030] |I forgot to add my user to the network group in /etc/group. [131039600040] |The connection worked with root, and after adding the user to network it now works for him too. [131039600050] |Connections without corkscrew did work before. [131039600060] |Does anybody have an idea where this "security" setting is stored? [131039600070] |I can't find anything in the Arch Linux wiki, /etc/, man ssh, or the corkscrew source / corkscrew documentation which checks for the network group. [131039610010] |Might be a stupid thing to check, but corkscrew depends on 'netcat' being in your path. [131039610020] |Most systems of course have netcat installed by default, but a few of them don't have the main binary 'nc' linked to 'netcat' as well. corkscrew depends on being able to call 'netcat', not 'nc'. [131039610030] |At least I had the same symptoms you're describing, and simply symlinking nc to netcat fixed it. [131039620010] |Pidgin will not transfer files. [131039620020] |Anytime someone tries to send me files over YIM or AIM when I am using Pidgin 2.7.5 on Arch Linux, it fails, telling me that they cancelled, and telling them that I cancelled. [131039620030] |The same computer using Pidgin on Windows manages to transfer these files successfully. [131039620040] |Is there some sort of checklist for these issues? [131039620050] |(Previously asked on SuperUser, where it didn't have any answers, and I recently put a bounty on it here: http://superuser.com/questions/172551/pidgin-will-not-recieve-files) [131039630010] |Why would I keep home directories in /var/home ? [131039630020] |As far as I understand, the traditional place for home directories is beneath /home. [131039630030] |Some Linux variants seem to keep them in /var/home, what's the reason for that? [131039640010] |I've never seen that... [131039640020] |But you can place things more or less all over the place; I mean, one user can be in /var/home/, another in /home/, and a third in /partyplace/home/... [131039640030] |But it just doesn't make any sense to me; it's better to follow the convention that users' data is stored under /home/ [131039650010] |/var might be on a different partition or disk. [131039660010] |My guess is that WebOS is designed to be installed on two different filesystems, a root filesystem that is read-only in normal operation and a filesystem mounted on /var that is read-write in normal operation. [131039660020] |Since home directories need to be writable, they are placed somewhere under /var. [131039660030] |This kind of setup is fairly common on unix systems that run off flash (such as PDAs¹ and embedded unices).
[131039660040] |While /home is mentioned by the Filesystem Hierarchy Standard on Linux and is generally common amongst unices, it is not universal (the FHS lists it as “optional” and specifies that “no program should rely on this location”). [131039660050] |Sites with a large number of users sometimes use /home/GROUP/USER or /home/SERVER/USER or /home/SERVER/GROUP/USER. [131039660060] |And I've seen directories rooted in other places: /homes, /export/home, /users, /net, ... [131039660070] |In fact, a long long time ago, the standard location for home directories was /usr. [131039660080] |¹ For example Android (not a unix, but running on a Linux kernel) has a read-only root filesystem and a writable filesystem on /data. [131039670010] |Disk quota exceeded; truncate not bringing quota back down [131039670020] |Resolved: See "However" at the end of the question for details. [131039670030] |I've managed to hose my login to a Unix box. [131039670040] |I don't have an easy way of contacting the administrator, so I'd like to resolve it myself ideally. [131039670050] |I don't have root access (that would be too easy). [131039670060] |Per the title, I've managed to create a large file through an app spamming stdout, which I now can't remove. rm -f doesn't work, nor does cat /dev/null >| $file, nor truncate -s 0 $file. [131039670070] |Errors are akin to the following, for everything I've tried. [131039670080] |Output from quota is unhelpful: [131039670090] |I'm at a loss on what to do next. [131039670100] |Google only gave me truncate and cat \dev\null, so any advice or suggestion would be gratefully received. [131039670110] |Output requested in the comments: [131039670120] |However: I'm not sure what happened, but when I logged in to get the details Gilles requested in the comments, I tried an rm, which worked just fine. quota -v is now producing no output, either. [131039670130] |I've no idea whether this is due to some admin intervention or some other cunning trickery, but it all appears sorted now. [131039680010] |I don't really know, why the commands you mentioned would fail, but you could try [131039680020] |This tells the shell to truncate the file to 0 length without spawning another process. [131039690010] |How do I install opkg installer on a system that doesn't have it. [131039690020] |The system I am using has no package installer in the root fs image. [131039690030] |How do I install opkg itself? [131039690040] |I can't find a link to the x86 binaries. [131039700010] |There aren't x86 binaries, because it's really aimed at embedded devices and the like. [131039700020] |You'll have to compile the source yourself: http://code.google.com/p/opkg/source/browse/ [131039700030] |It does beg the question though... [131039700040] |If it's an x86 machine, why not just use apt/dpkg? [131039710010] |Multi-monitor Xorg nVidia on Ubuntu 10.10 without root? [131039710020] |Here at work we just set up a Ubuntu Terminal Server, All of our workstations have multiple monitors, of different sizes, 2-3 screens and varying nVidia video card models, but they all are nvidia. [131039710030] |I at least have been using Twin-View. [131039710040] |But I don't want to have to log in every day and set up Twin-View with nvidia-settings first thing. [131039710050] |Is there a way to set up Twin-View from a configuration file? or possibly a startup script? 
[131039710060] |I don't have root access to the TS; however, if we could find a one-size-fits-all xorg.conf configuration that makes multi-monitor just work on every machine, then I might be able to convince the TS admin. [131039720010] |As described in the “Loading Settings Automatically” section of the nvidia-settings(1) man page, you can save the nvidia-settings configuration to a user config file and then call nvidia-settings to load it automatically from your .xinitrc, .xsession, or whatever your desktop session startup uses. [131039730010] |Filesystem Hierarchy Standard - Version 2.3 [131039730020] |This standard consists of a set of requirements and guidelines for file and directory placement under UNIX-like operating systems. [131039730030] |The guidelines are intended to support interoperability of applications, system administration tools, development tools, and scripts as well as greater uniformity of documentation for these systems. [131039750010] |How do I view files as a tree structure? [131039750020] |I would like to view files exactly like I can view processes using pstree. [131039750030] |So is there something like: [131039750040] |that would give me what I need? [131039760010] |You can use the tree utility for this: [131039770010] |How can I grep through a bunch of directories with an @ in the name? [131039770020] |I've tried this. [131039770030] |Here's what the directory looks like. [131039770040] |They're log files from pidgin. [131039780010] |Doh! [131039780020] |I just realized after asking that my problem is not the @, it's the -; this works [131039790010] |Why does truncate fail for sizes above 2043G in ext3? [131039790020] |Why does this work: [131039790030] |...while this fails: [131039790040] |Why 2043 gigabytes, of all values? [131039800010] |You're probably using a filesystem that has a 2TB maximum file size (for example, ext3 with a 4KB or 8KB block size). truncate won't let you specify a target file size greater than the maximum your file system supports; 2044GB is very close to 2TB. [131039800020] |I'm not sure why it's not exactly 2048GB that causes the problem; it's probably something internal to ext3's method of storing files. [131039810010] |That's a limitation of the file system you use. [131039810020] |Here ZFS creates a whopping 7 Exabyte sparse file: [131039820010] |How can I move to the bottom of a document using the pico editor? [131039820020] |How can I move to the bottom of a document using the pico editor? CTRL+V is next page; is there a similar key for last page? [131039830010] |Press CTRL+_ and instead of entering a line number hit CTRL+V. [131039840010] |Python MySQLdb compatible with which versions of Python? [131039840020] |I was reading the MySQLdb Python page and cannot seem to find any version information. [131039840030] |Currently I am developing with this MySQLdb module using Python 2.5 and 2.6, but was wondering if it is compatible with versions such as Python 2.3.4. [131039840040] |In the readme accompanying the download from sourceforge it says: "Versions lower than 2.3 WONT WORK" and this language always worries me. [131039840050] |Did they mean lower than 2.3, or lower than or equal to 2.3? [131039840060] |Any ideas where I can determine this information? [131039840070] |Perhaps this question is better asked on superuser?
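One quick empirical check for the question above, since the docs are vague: ask each interpreter you care about to import the module and report its version. This is a hedged sketch — MySQLdb has long exposed a __version__ string, but verify against your build, and the interpreter names here are assumptions about what is installed on your system:

    # run once per interpreter you need to support
    python2.5 -c 'import MySQLdb; print MySQLdb.__version__'
    python2.3 -c 'import MySQLdb; print MySQLdb.__version__'   # a failure here answers the 2.3.4 question

If the import fails on your Python 2.3.4, then in practice the readme's "lower than 2.3" warning includes your version, whatever the wording was meant to say.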
[131039850010] |Difference between nohup, disown and & [131039850020] |What is the difference between [131039850030] |and [131039850040] |and [131039860010] |Using & causes the program to run in the background, so you'll get a new shell prompt instead of blocking until the program ends. nohup and disown are largely unrelated; they suppress SIGHUP (hangup) signals so the program isn't automatically killed when the controlling terminal is closed. nohup does this when the job first begins. [131039860020] |If you don't nohup a job when it begins, you can use disown to modify a running job; with no arguments it modifies the current job, which is the one that was just backgrounded. [131039870010] |How to make a distro fullscreen on VirtualBox? [131039870020] |I'm running Debian as the host and I fail to get Debian, Ubuntu, and Fedora guests to go fullscreen. [131039870030] |Changing the video memory didn't help. [131039870040] |NOTES:
  • I haven't tried any other distro
  • [131039870060] |I didn't create the WinXP guest myself
[131039880010] |There are a few things that need to be in place. [131039880020] |You need to have the guest additions installed on the guest OS. [131039880030] |You have to have the "guest display auto-resize" feature enabled (Host+g toggles this), and then you have to actually activate full-screen mode (Host+f toggles this). [131039890010] |How do I send stdin to the clipboard? [131039890020] |Is there functionality in unix that allows for the following: [131039900010] |There are a couple of tools capable of writing to the clipboard; I use xsel. [131039900020] |It takes flags to write to the primary X selection (-p), secondary selection (-s), or clipboard (-b). [131039900030] |Passing it -i will tell it to read from stdin, so you want: [131039910010] |Dragging of windows slow/laggy after installing ATI Radeon HD 4870 graphics drivers in Ubuntu 10.10 Gnome? [131039910020] |I recently installed Ubuntu 10.10. [131039910030] |Graphics were actually fine just using the default drivers, and I could drag windows fast and smoothly. [131039910040] |However, since I have an ATI Radeon HD4870 card, I should be able to get more out of it by installing the ATI driver for Linux. [131039910050] |So, I decided to install the ATI drivers for my Radeon 4870. [131039910060] |Now my resolution is fine and everything looks alright, but when I drag windows it's not smooth at all and very laggy. [131039910070] |I haven't seen any other implications, although I didn't test any graphics-heavy applications. [131039910080] |The window dragging is very laggy though, and thus a problem. [131039910090] |Any idea what could cause this? [131039920010] |If your window manager (possibly Compiz) or window decorator (possibly Emerald) is using a fancy effect like alpha blur, that might cause this problem. [131039930010] |It could be that you already had the fglrx (ATI proprietary) driver installed, and what you just installed is the open source driver. [131039930020] |The open source driver is generally slower than the proprietary driver. [131039930030] |It could also be the reverse - if the open-source driver was compiled with DRI (direct rendering infrastructure) support but the fglrx driver was not, it can be slower. [131039940010] |If dragging windows is laggy (scrolling is probably fine), it usually means you're on software rendering and you didn't install those drivers properly. [131039950010] |How can I tell if my hard drive is PATA or SATA? [131039950020] |I have an ATA hard disk in my laptop, running Fedora 11, kernel 2.6.30.10-105.2.23.fc11.i586. [131039950030] |I am looking to upgrade the disk in here (would love to get an SSD) but I forgot if it's a serial ATA or an old parallel ATA interface. [131039950040] |There's not much use upgrading to an SSD if it's PATA... [131039950050] |How can I tell if the disk is connected via a PATA or an SATA interface? [131039960010] |Update: For the record, @Gilles' answer is better. [131039960020] |If it's a PATA (IDE) drive, then you will see it under /proc/ide. [131039960030] |Here is my IDE DVD drive, for example. [131039960040] |If it is a SATA drive, it will show up under /proc/scsi. [131039960050] |You might be surprised to find it under 'scsi'. [131039960060] |I forget the exact reason (I'm going to ask that in another question), but I think that is because SATA uses the SCSI drivers.
[131039960070] |Here's a list showing a SATA drive on my system: [131039970010] |To see the device description for the controller (assuming an internal (PCI) controller), which usually contains SATA for SATA controllers: [131039970020] |If you want to type less, just browsing the output of lspci is likely to give you the answer on a laptop (many desktops have both kinds of interfaces, so you'd have to look up the drive you're interested in). [131039970030] |If that doesn't give you the answer, to see what driver is providing sda (you can then look up whether that driver is for a PATA or SATA controller): [131039980010] |How can I set up Apache on Linux to stream WMV-HD to Xbox 360? [131039980020] |What I am looking for is a free and open source solution. [131039980030] |If the distro I use matters, it is Open SUSE. [131039980040] |VLC supports only WMV1&2. [131039990010] |Look up DLNA. [131039990020] |I don't know what packages on OpenSUSE would provide it, but it's your best bet. [131039990030] |Under Ubuntu, DLNA is provided by the package Rygel (although there is a plug-in for Rhythmbox called Coherence). [131040000010] |Why do my SATA devices show up under /proc/scsi/scsi ? [131040000020] |I have 3 SATA devices on my system. [131040000030] |They show up under /proc/scsi/scsi, although these are not SCSI devices. [131040000040] |Why do my SATA devices show up under the SCSI directory? [131040010010] |They show up as SCSI devices because the drivers speak SCSI to the next kernel layer (the generic disk driver). [131040010020] |This isn't actually true of all SATA drivers on all kernel versions with all kernel compile-time configurations, but it's common. [131040010030] |Even PATA devices can appear as SCSI at that level (again, that depends on the kernel version and kernel compile-time configuration, as well as whether the ide-scsi module is used). [131040010040] |It doesn't really matter whether the driver speaks SCSI to the physical device, though it in fact does: ATAPI (the current version of ATA, protocol-wise) reuses the SCSI protocol anyway, so even IDE devices (for the last 15–20 years) are speaking SCSI to the controller. [131040010050] |The separate ide interface inside the kernel is more of a historical relic. [131040010060] |You'll notice that USB disks also appear as scsi, for the same reason (and they speak SCSI too on the USB bus). [131040010070] |The same goes for Firewire. [131040020010] |Enabling Wireless in Fedora [131040020020] |I have been using Fedora 13 for around 6 months and it's really working well. [131040020030] |I have an internet connection in my room (wired connection) and I have been using it for quite some time. [131040020040] |I have not used the WiFi connection at all, and in order to check whether WiFi is working properly or not, I took my laptop to my college library, which is WiFi enabled. [131040020050] |Unfortunately, when I turn on my system it's not detecting any wireless network at all. [131040020060] |What is the problem and how should I rectify this? [131040030010] |Can't get the '-o remount' option on an NFS share to work in Slackware 13.1 [131040030020] |I've had rsnapshot working under Slackware 13.0 for a few months. [131040030030] |In my /etc/rsnapshot_ scripts, the first thing I have them run is mount -o remount,rw and the very last thing they do is mount -o remount,ro. [131040030040] |The reason behind this is to protect my backups from accidental deletion by making them read-only whenever they are not actively being created.
[131040030050] |When I upgraded to 13.1, this -o remount functionality seems to have either disappeared or broken: [131040030060] |Does anybody have a proposed solution to remedy this? [131040040010] |This doesn't exactly answer your question, but I'd advise against using rsnapshot over NFS. [131040040020] |You are negating the primary benefit of rsync, which is the ability to transfer a small amount of data over the network to detect large portions of identical data. [131040040030] |Rsync is designed to run over ssh, where it can invoke an rsync server on the other side of the connection and communicate with it via its own optimized protocol that uses a rolling checksum to identify identical data. [131040040040] |When rsync is run over NFS and it thinks the file might be different due to timestamps or size, it must download the entire file over NFS even if it's only a small change, since it has no way of querying the remote side for checksums across the data. [131040050010] |How to mount a disk on key on Red-Hat 6 (Beta) after ejecting it [131040050020] |Hello experts, [131040050030] |I have a Cruzer disk on key which was recognized automatically by my RedHat6-beta OS. [131040050040] |I always removed it using the GUI option "Safely remove", but one time by mistake I selected "Eject" and since then it has not been recognized. [131040050050] |I tried to mount it according to some Google results, including editing the fstab file - no success. [131040050060] |Does someone have an idea how to solve that? [131040050070] |Thank you all in advance. [131040060010] |Got it after all. [131040060020] |First - run the dmesg command -- a new SCSI device is recognized (usually sdb1). [131040060030] |Then mount /dev/sdb1. [131040060040] |Hope that it will help someone. [131040070010] |easy-to-use image-editing FLOSS [131040070020] |I'm looking for recommendations for something easy to use that will allow me to edit my images. [131040070030] |I fiddled with Inkscape and GIMP and found them not so intuitive (I don't want to have to read a manual to know how to draw a border around some part of the image). [131040080010] |Try Pinta, its interface is pretty similar to MS Paint's. [131040090010] |Well, some of the programs include:
  • f-spot
  • [131040090030] |gthumb
[131040090040] |They are easy to use, but their options are pretty poor compared to Inkscape or Gimp (even though Inkscape is used for vector editing and Gimp for raster). [131040090050] |Keep in mind that Gimp is worth trying a little harder, because when you master its basic functions, it will boost your image editing on a daily basis. [131040090060] |If you don't like the current UI of Gimp, use v2.7 to get a single-window UI (like Photoshop). [131040090070] |Here's a bunch of nice and free PDFs for Gimp usage and stuff. [131040100010] |It depends on what exactly you need to do with your images... [131040100020] |If you want something like mspaint, you could try Kolourpaint (from KDE)... [131040100030] |If you want to process photos (change brightness/crop/etc) I'd recommend digiKam - it is a digital album (like f-spot or Picasa, mentioned before) with a lot of tools to change the properties of photos - crop, change brightness, denoise, etc... [131040110010] |What to use to back up files, preserving ACLs? [131040110020] |When using the tar utility to store files in backups, one loses the extended ACLs. [131040110030] |Is there some commonly used and not hackish solution (like: create a script that will recreate the ACLs from scratch) to preserve the ACLs? [131040120010] |From the tar Man Page. [131040120020] |-p, --same-permissions, --preserve-permissions ignore umask when extracting files (the default for root) [131040120030] |It is not actually the act of archiving that alters the access permissions (ACLs), but the act of unpacking them. [131040120040] |Tar is very often used to distribute files from one user to another, and so it is thought convenient to apply a user's umask when they unpack. [131040120050] |To preserve the files' previous permissions, simply add a p to your options. [131040120060] |For example [131040120070] |Straight tar: [131040120080] |bz.tar: [131040120090] |gz.tar: [131040130010] |Actually, I believe the question was not about the (standard) file permission bits, but extended ACL information (see setfacl(1) or acl(5)). [131040130020] |To my knowledge, the unmodified GNU tar ignores ACL information. [131040130030] |(The man page for GNU tar 1.15.1 as shipped with RHEL 5.2 mentions switches --acls and --no-acls, but I haven't gotten them to work.) [131040130040] |However, the star program is able to back up and restore ACLs, if you select the exustar format: [131040130050] |Star home page: http://cdrecord.berlios.de/new/private/star.html Star is available in Ubuntu, at least. [131040140010] |If you're looking for a simple-to-use yet powerful solution, I'd recommend rdiff-backup. [131040140020] |Basically, it makes a copy of a source directory to a destination directory, but it also saves additional information so you can go back in time to whenever you want. [131040140030] |And, of course, it preserves symlinks, special files, hardlinks, permissions, uid/gid ownership and modification times. [131040150010] |Which minimal but extendable Linux distribution to choose [131040150020] |I need an extendable Linux distribution which I can easily reduce in size so much that it fits onto a 64 MB CF card. [131040150030] |In this stripped version it will run on a Via C7, and all that is needed is a kernel, networking, a shell, basic Perl and an FTP server. [131040150040] |There are some distributions for embedded systems which can do this; however, I have the requirement that it should be possible to expand this set in the future, e.g. to a basic X setup, or Python instead of Perl, etc.
[131040150050] |Which distributions do you know of that can do this? [131040150060] |Can any of the major distributions like Fedora, Debian, or Ubuntu be stripped down that far? [131040150070] |Edit: I looked at Embedded Debian, which seems pretty close to what I need. [131040150080] |Sadly, development seems to have stalled due to health problems of the main maintainer. [131040160010] |Damn Small Linux is the only off-the-shelf 50MB distribution that I know of. [131040160020] |It is vaguely Debian-ish, so one can use apt and friends if needed. [131040170010] |Let's check a few figures for mainstream distributions (i386 binaries):
  • Debian lenny: cdebootstrap -f minimal lenny lenny-minimal produces 77MB (see the sketch after this list). [131040170030] |Add ~30MB for the package lists. [131040170040] |About 9MB is documentation (/usr/share/doc, /usr/share/man), and about 25MB is locale data; you can remove these (but upgrades will bring the files back). [131040170050] |This includes a minimal Perl setup (add 29MB for the standard library). [131040170060] |There's no editor (add 2MB for nvi or nano), and no ssh daemon (add 17MB for OpenSSH, 11MB for lsh). [131040170070] |Basic FTP daemons start under 1MB.
  • [131040170080] |NetBSD 5.1 starts at about 84MB (about 33MB in a tar.gz) for a kernel plus the base system, which includes a comprehensive network suite (ftpd, sshd, postfix, ...) and an X server, but no Perl or X clients. [131040170090] |There's no documentation, but about 10MB of locales.
  • [131040170100] |OpenBSD 4.8 starts at about 160MB (about 60MB in a tar.gz) for a kernel plus the base system (including Perl with the full standard library, but no X server). [131040170110] |There are smaller OpenBSD distributions such as Flashdist, though none looks up-to-date.
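To make the Debian figures above concrete, here is a hedged sketch of producing and trimming such a tree; cdebootstrap's -f minimal flavour is as quoted in the list, but the mirror URL and the exact sizes you will see are assumptions to check on your system:

    # build a minimal lenny tree into ./lenny-minimal (mirror URL is an assumption)
    cdebootstrap -f minimal lenny lenny-minimal http://archive.debian.org/debian
    du -sh lenny-minimal      # roughly 77MB before trimming
    # reclaim the ~9MB of docs and ~25MB of locale data mentioned above
    rm -rf lenny-minimal/usr/share/doc/* lenny-minimal/usr/share/man/* lenny-minimal/usr/share/locale/*

Remember that package upgrades will reinstall the removed files, so this trimming has to be repeated (or scripted) after every upgrade.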
[131040170120] |Going by the BSD figures, compression lets you fit about 120MB of programs in about 50MB of raw storage. [131040170130] |At a 250% gain, you're definitely going to want compression. [131040170140] |Under Linux, you have a few choices of read-write compressed filesystems, in particular Jffs2. I don't know what the possibilities are under *BSD. [131040170150] |If you have a lot of RAM, you don't need to depend on kernel support for a compressed filesystem; you can have a tar.gz or 7z archive that you uncompress into RAM at boot time. [131040170160] |There is a wide range of small Linux distributions, from single floppies to live CDs. [131040170170] |You'd want something in the middle. [131040170180] |Damn Small Linux and Puppy Linux are two popular choices; both run from RAM, and you'll need to remaster Puppy to take away stuff you don't need (the main distribution is too big for you). [131040180010] |After quite some research, in the end I settled on SliTaz. [131040180020] |I can really recommend it, as I haven't found any distribution which is so flexible. [131040180030] |There is a minimal system (well under 20 MB), basically giving you just a shell and ssh access. [131040180040] |However, there is a huge package repository, so you can extend it to a graphical interface, server daemons, etc. [131040190010] |Try tinycorelinux - it's only 10 MB (even less for microcore - the console version). [131040190020] |Beyond the small size, it also boots very quickly. [131040190030] |I had a similar situation. [131040190040] |I tried Slax, which is similar to SliTaz mentioned here, but I found that it still contains many features I don't need, and that it still takes too much time to boot for an embedded device. tinycorelinux is very minimal, but when looking for a minimal system I prefer to start with almost nothing, and add just what I need. [131040190050] |They have a package system that includes many packages, many of them also minimized and stripped down. [131040200010] |There is emdebian grip, which is binary compatible with Debian but removes all documentation and other files not strictly needed for functionality. [131040200020] |You can mix and match packages with regular Debian, but it should give you a good base system even without doing that. [131040210010] |How to resume a gnome session? [131040210020] |I am in the middle of a project where I repeatedly have to stop X11 to debug. [131040210030] |I stop with Ctrl+Alt+F1, then log in and [131040210040] |Then I go about my debugging, then restart gdm with [131040210050] |and I get back the login screen; when I log in yet again and execute "users" I get back [131040210060] |making me think that I'm still logged in from the first time: the terminal without gnome, gnome again, and another instance. [131040210070] |If my old gnome session is still logged in, how do I resume that session instead of starting a new one? [131040220010] |Quoth the manual: [131040220020] |The gnome-session-save program can be used from a GNOME session to save a snapshot of the currently running applications. [131040220030] |This session will be later restored at your next GNOME session. [131040220040] |Is that what you are looking for? [131040220050] |From your question it is unclear if you are concerned about saving the state or about truly logging out of your gdm.
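As a concrete illustration of the manual excerpt above, a hedged sketch (GNOME 2 era; check the flags your version actually supports before relying on them):

    gnome-session-save --help    # list the options supported by your version
    gnome-session-save           # snapshot the currently running applications

The snapshot is what gets restored the next time you log in, which addresses the "resume" half of the question, but not a gdm restart that has already killed the running session.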
[131040230010] |Depending on your distribution's startup scripts, and perhaps on what you're doing to your X server, stopping the gdm service may or may not kill the X display and the programs running on it. [131040230020] |The easiest way to kill your X server is to press Ctrl+Alt+Backspace. [131040230030] |If that X server is running in a virtual machine, make sure you send the key combination to the VM and not to the host (e.g. under VirtualBox, press Host+Backspace). [131040230040] |This will kill your session and typically will give you a login prompt as gdm restarts the X server. [131040230050] |Make sure you don't have the DontZap option in your xorg.conf (it disables Ctrl+Alt+Backspace; it's off by default). [131040240010] |Can't you just Ctrl+Alt and then hit F-x keys until you get back to GNOME? [131040240020] |That's what I do. [131040240030] |On my Ubuntu machine Ctrl+Alt+F7 gets me back to my gnome session. [131040250010] |Amarok2 "Search Collection" shortcut? [131040250020] |Is there a way to make a shortcut for Amarok2 that would put focus on the search collection box? [131040260010] |"lshw -C disk" returns but prints nothing [131040260020] |I am using an Ubuntu live CD to help me recover some data off of a hard drive. [131040260030] |I used lshw -C disk to find out which device I need to copy, /dev/sda in this case. [131040260040] |I am using ddrescue -n to try and recover some data from a failing hard drive. [131040260050] |It stops at 100GB of a 500GB hard drive. [131040260060] |After it finishes, sudo lshw -C disk does not print anything. [131040260070] |The next step in using ddrescue is to use sudo ddrescue -r 1 /dev/sda, but it reports there is no such file or directory. [131040260080] |What is going on; why is lshw failing to report anything? [131040260090] |Edit: Added sudo to relevant places. [131040270010] |Try running lshw as the superuser. [131040280010] |It looks like the way in which your disk is failing is so bad that the kernel becomes unable to keep communicating with the disk. [131040280020] |There are probably a lot of errors concerning the disk in /var/log/kern.log. [131040280030] |If you post its contents here, people might have tips to help you recover more. [131040280040] |(Post only the part from the first disk error, presumably triggered during the ddrescue -n, to the point where the kernel deactivates sda; if there's a long and repetitive bit in the middle, it's ok to cut the repetitions.) [131040280050] |But don't expect miracles; there's a chance that the last 400GB are simply beyond recovery without spending thousands of dollars on a professional service. [131040290010] |Network monitoring tool. [131040290020] |I'm basically looking for a utility that displays which processes are using how much bandwidth, similar to how top displays which processes use how much of other resources. [131040300010] |Have a look at ntop.org. [131040310010] |netstat can give you usage statistics on a per-socket basis. [131040320010] |NetHogs is the best tool I have found so far that fulfills my need, but sadly needs to be run as root. [131040330010] |I would like to add iptraf to the list. http://iptraf.seul.org [131040340010] |How to check how many lanes are used by the PCIe card? [131040340020] |PCI Express slots on the motherboard can be wider than the number of lanes connected. [131040340030] |For example, a motherboard can have an x8 slot with only x1 lane connected. [131040340040] |On the other hand, you can insert a card that uses only, for example,
4 lanes into an x16 slot on the motherboard, and they will negotiate to use just those 4 lanes. [131040340060] |How can I check from the running system how many lanes are used by the inserted PCIe cards? [131040350010] |OK, it seems I missed it on the first try in the lspci man page. [131040350020] |lspci -vv displays a lot of information, including the link width: [131040360010] |Quick way to host a webserver on localhost [131040360020] |I'd like to make the contents of a folder available at http://localhost:PORT/, temporarily. [131040360030] |A very basic http server. [131040360040] |I already know about, [131040360050] |or ( this seems like the new way ) [131040360060] |but I'm looking for alternate command-line methods. [131040370010] |There is no such thing as a "system" webserver in unix, just different "methods". [131040370020] |You can install software on your system which contains a simple webserver and use it, or not. [131040370030] |python -m SimpleHTTPServer just loads the SimpleHTTPServer module, which contains a basic webserver. [131040370040] |Something similar exists for Perl; just have a look at CPAN: http://search.cpan.org/dist/HTTP-Server-Simple/ [131040370050] |"Simple" is a solution for Java: http://www.simpleframework.org/ [131040370060] |The same can be really easily achieved with JavaScript and nodejs: http://nodejs.org/api.html , see the section about HTTP. [131040370070] |Another solution would be to do it yourself: HTTP is a really simple protocol when it comes to only serving some static files. [131040370080] |To get /foo/bar your browser will request it with: [131040370090] |The reply should be in the following form: [131040370100] |or [131040370110] |Include the Last-Modified header to enable caching of the resources. [131040370120] |It should be possible to write a minimal implementation of this in a few lines of code (a netcat-based sketch follows the link list below). [131040370130] |Tie it to a port and you will have your webserver up and running. [131040370140] |Use inetd or netcat to bind it to your IP. [131040370150] |Edit: Here is a simple shell script which does exactly this job. [131040370160] |It also supports generating an index for the folders and 404 error handling:
  • SWS Manpage: http://prd4.wynn.com:8080/src/sws.8.html
  • [131040370180] |SWS Source: http://prd4.wynn.com:8080/src/sws.txt
  • [131040370190] |SWS Project Page: http://prd4.wynn.com:8080/README.html
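And here is the netcat-based sketch referenced above: a minimal do-it-yourself server that answers every request with a single file, ignoring the request path — fine for a quick transfer on a trusted LAN, nothing more. The port and filename are assumptions, and the flags shown are for traditional GNU netcat:

    #!/bin/sh
    # one-file HTTP server sketch: answers every request with ./index.html on port 8080
    while true; do
      { printf 'HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n'
        cat index.html
      } | nc -l -p 8080 -q 1    # GNU netcat; BSD nc wants 'nc -l 8080' instead
    done

A real server would parse the request line (GET /foo/bar HTTP/1.0) and map the path to a file, which is exactly what the SWS script above does.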
  • [131040380010] |How is a graphical application started from a bash session connected to that bash session? [131040380020] |When I start a graphical application from a terminal running bash, that application is somehow connected to that bash session. [131040380030] |For example, when the applications dumps some text it will appear in the bash session it is started from. [131040380040] |Also, some applications will get closed when i close the terminal using the close button, but not when i close the terminal by exiting the bash session using the exit command or CTRL+D. [131040380050] |How is a graphical application started from a bash session connected to that bash session? [131040380060] |bonus question: How can I inspect this connection? probably also manipulate? [131040390010] |Your shell starts these programs as child processes. [131040390020] |If you run pstree you can see these relations. [131040390030] |I don't know if/how you can manipulate it. [131040400010] |The application is connected in two ways: to bash, and to the terminal. [131040400020] |The connection to the terminal is that the standard streams (stdin, stdout and stderr) of the application are connected to the terminal. [131040400030] |Typical GUI applications don't use stdin or stdout, but they might emit error messages to stderr. [131040400040] |The connection to the shell is that if you started the application with foo &, it remains known to the shell as a job, as explained in Difference between nohup, disown and &. [131040400050] |When you close the terminal, the shell receives a SIGHUP, which it propagates to its jobs. [131040400060] |When you type exit in the shell, it disowns the jobs beforehand (this is configurable to some extent). [131040400070] |You can sever the shell connection with the disown built-in. [131040400080] |You can't sever the terminal connection, at least not without underhand methods (using a debugger) that could crash the program. [131040410010] |How can I get a full process list in solaris, without truncated lines? [131040410020] |Is there a way to generate a full process listing in solaris, without truncated lines? [131040410030] |I've tried the ps commmand, with the following arguments: [131040410040] |So, those both seem to do what I want, however, further down in the ps man page, I find this: [131040410050] |Which basically says the output is going to be truncated and there is nothing I can do about it. [131040410060] |So, I'm coming here. [131040410070] |Surely other people have run into this problem and maybe even have a way around it. [131040410080] |I'm guessing ps can't do it and so I need to use other tools to do this. [131040410090] |Is that accurate? [131040420010] |The kernel is not required to keep track of command line arguments. [131040420020] |When a process is started through the execve call, the kernel must copy the arguments into the process memory (so that they will be available as argv in a C program, for example). [131040420030] |After that, the kernel can discard the memory used to store the initial command line arguments. [131040420040] |The process is allowed to overwrite its copy of the arguments. [131040420050] |So there may simply be no trace of the arguments. [131040420060] |Some unix variants do keep a copy of the arguments in some form. [131040420070] |Solaris exposes some data in /proc/$pid. 
[131040420080] |As of OpenSolaris 2009.06, the only trace of the arguments is in /proc/$pid/psinfo, where they are concatenated with spaces in between (so you can't distinguish between foo "one" "two" and foo "one two") and the resulting string is truncated to 80 bytes. [131040420090] |This field in /proc/$pid/psinfo is what ps prints in the args column. [131040420100] |By the way, the -f and -l options control what fields are printed, not whether the fields are truncated to some width. [131040430010] |You could try [131040430020] |This gives you a list of all arguments. [131040430030] |Or else use another ps. [131040430040] |If run as root (or any user with enough privileges, for that matter) [131040430050] |will give you all arguments. [131040430060] |It's part of SUNWscpu, "Source Compatibility, (Usr)" [131040430070] |HTH Marcel [131040440010] |Depending on which ps command you use; I use [131040440020] |ps -auxw [131040450010] |Debian sed: can't read [131040450020] |I'm trying to run a command to remove require_once from PHP files from here (code below), but I am getting the error: [131040450030] |sed: can't read -: No such file or directory [131040450040] |I am in the correct folder; what's the problem? [131040450050] |Update: [131040450060] |If I run: [131040450070] |I get: [131040450080] |find: invalid predicate `-wholename' [131040450090] |I tried this and it returned a list of all the files: [131040450100] |But then changing the original to reflect this: [131040450110] |Gives the error: [131040450120] |-bash: xargs: command not found [131040450130] |FYI, I'm running sed version 4.1.2 and I'm a bit lost in the command line already, so please explain your answers. [131040460010] |The error seems to indicate that sed tries to read from stdin. [131040460020] |Have you tried just the find part, to see if it returns any files? [131040470010] |What version of sed do you have? [131040470020] |POSIX sed doesn't understand double-dash options, and neither does minised, but GNU sed does. [131040470030] |For something more portable (and IMO nicer), try sed -i '/require_once/s![[:space:]]*!&// !'. [131040480010] |Your second invocation doesn't make any sense; why the "\ " before the xargs? [131040480020] |You are trying to call a program called " xargs", and bash tells you it can't find it (note the double blank after the colon), which is hardly surprising. [131040480030] |To get rid of the error case where sed hangs when xargs returns zero files (because it's trying to read from stdin when there are no command line arguments), you should add -r to your xargs arguments.
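Putting the pieces of this thread together, a hedged sketch of the whole pipeline with the fixes suggested above (GNU find/xargs/sed assumed; -wholename needs a reasonably recent GNU find, hence the plain -name predicate here, and the sed expression is the portable one quoted in the thread):

    # comment out require_once lines in all PHP files under the current directory
    find . -name '*.php' -print0 |
      xargs -r0 sed -i '/require_once/s![[:space:]]*!&// !'

-print0 and -0 keep filenames with spaces intact, -r stops xargs from running sed with no files at all (which is what made sed read stdin and produce the "can't read -" behaviour), and the sed expression inserts "// " after any leading whitespace on each matching line.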