[131007090010] |How can I use dd to migrate data from an old drive to a new drive? [131007090020] |I am upgrading the internal SATA hard drive on my laptop from a 40G drive to a 160G drive. [131007090030] |I have a Linux/Ubuntu desktop which has a SATA card. [131007090040] |I would actually like to do the same thing for a couple of CentOS & FreeBSD boxes at work, and it seems this would have the same solution. [131007090050] |I've heard that I can use dd to mirror the 40G partition to the 160G drive, or that I can copy the 40G partition to an image on my local system, and then copy the 40G image to the 160G drive. [131007090060] |Can anyone describe how I may do this? [131007090070] |Do I need any other utilities, such as gparted? [131007100010] |To simply copy the partition, you can use "dd if=/dev/srcDrive of=/dev/dstDrive" or something like this. [131007100020] |I would recommend reading its man page. [131007100030] |Sorry I can't give much more info, since I'm at work right now... [131007110010] |Normally I would suggest a solution such as "hook up the 2nd hard drive using an external enclosure, boot from a Linux CD, then use a command such as 'dd if=/dev/sda of=/dev/sdb bs=1G'", but since you want to use the same technique for work, I have what may be a better solution. [131007110020] |All of my servers and laptops get imaged at work using Clonezilla. [131007110030] |There are two ways of using it... one of which uses a dedicated server and is probably overkill for you, and another which utilizes a boot CD and external hard drive. [131007110040] |The idea is that you boot in with the Clonezilla CD and have a largish (bigger than the source drive) external USB drive. [131007110050] |Clonezilla walks you through making an image of the existing drive, after which you power down the machine, replace the drive, then boot back into Clonezilla, and it walks you through restoring the data. [131007110060] |This gives you the opportunity to A) put the image on a bigger drive, and B) retain a backup of the data. [131007120010] |One simple example is this: [131007120020] |But if you have some special needs, you really should read the manpage (man dd) or search on Google. [131007120030] |Another idea could be the use of rsync (don't forget to set the right options, like -az [packages the files instead of copying one file after another] or --numeric-ids [uses the uid/gid instead of names like "root"] and maybe some others). [131007120040] |The link contains many examples. [131007120050] |If the new HDD doesn't have partitions, you can use gparted or palimpsest. [131007120060] |If you're unsure, I would format the HDD and then sync the data with rsync. [131007130010] |Your first task would be to connect both disks to an existing Linux system or connect the new disk to the original system. [131007130020] |You must be very careful, since it is very easy to copy the blank disk on top of the good disk! [131007130030] |To end up with the boot sectors and all, you would do something like: [131007130040] |dd if=/dev/hdx of=/dev/hdy [131007130050] |Where hdx is your 40G disk and hdy is your 160G disk. [131007130060] |You will notice there are no partition numbers like /dev/hdx1. [131007130070] |This copies the entire disk, partition info and all. [131007130080] |Your new disk will look just like the old disk, with only 40G allocated. [131007130090] |It should boot right up when placed back in your laptop. [131007130100] |Hope you used LVM? [131007130110] |Otherwise, hope you did not use all the partitions?
[131007130120] |Getting past this point requires a lot more info. [131007130130] |Another solution is to dump each individual partition. [131007130140] |This requires a lot more situational awareness, since you will need to recreate the boot information. [131007130150] |All of this is best used for cloning computers, not upgrading hard disks. [131007130160] |It is much better to restore to a new installation using your backups. [131007140010] |You asked how to do it with dd, but I had better success piping the output of dump into restore. [131007140020] |Given the source ad1s1a and the target ad2s1a: [131007140030] |I tried this on FreeBSD; actually, I found it on the FreeBSD Forum. [131007150010] |What does the @ mean in ls -l? [131007150020] |I am using Mac OS X. [131007150030] |When I type ls -l I see something like [131007150040] |What do the @'s mean? [131007160010] |It indicates the file has extended attributes. [131007160020] |You can use the xattr command-line utility to view and modify them: [131007170010] |On OS X, this indicates the presence of metadata associated with the file. [131007180010] |I think it means that the file/directory has extended attributes [1]. [131007180020] |[1] http://en.wikipedia.org/wiki/Extended_file_attributes [131007190010] |You may want to have a look at this post in the Apple mailing lists. [131007190020] |It explains that the @ shows that the file has extended attributes other than ACLs. [131007200010] |It has extended attributes - see the OS X man page here for more information on ls. [131007210010] |In Snow Leopard, at least, you can do this to show more information: [131007220010] |Question on \? in regular expressions [131007220020] |The following command is used to search for a 7-digit phone number: [131007220030] |What does \? stand for? [131007230010] |It's like ? in many other regular expression engines, and means "match zero or one of whatever came before it". [131007230020] |In your example, the \? is applied to the [ -], meaning it tries to match a space or a minus, but that the space or minus is optional. [131007230030] |So any of these will match: [131007230040] |The reason it's written as \? rather than ? is for backwards compatibility. [131007230050] |The original version of grep used a different type of regular expression called a "basic regular expression", where ? just meant a literal question mark. [131007230060] |So that GNU grep could have the zero-or-one functionality, they added it, but had to use the \? syntax so that scripts that used ? still worked as expected. [131007230070] |Note that grep has an -E option which makes it use the more common type of regular expression, called "extended regular expressions". [131007230080] |man 1 grep: [131007230090] |... [131007230100] |... [131007230110] |... [131007230120] |Further info: [131007230130] |
  • grep -E option and egrep
  • [131007230140] |
  • GNU grep - Basic vs Extended
  • [131007230150] |
  • Regexp Syntax Summary
  • [131007230160] |
  • Regular Expression - Wikipedia
  • [131007230170] |
  • Why do some regex commands have opposite interpretations of '\' with various characters?
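The actual command was not shown above; a plausible reconstruction (phonelist.txt is just a placeholder file name) shows the same pattern in both syntaxes:

    # BRE form: \? and \{ \} are the escaped operators GNU grep accepts
    grep '[0-9]\{3\}[ -]\?[0-9]\{4\}' phonelist.txt

    # Equivalent ERE form via -E: ? and { } work unescaped
    grep -E '[0-9]{3}[ -]?[0-9]{4}' phonelist.txt

Both match "5551234", "555 1234" and "555-1234".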
  • [131007240010] |Unfortunately, the exact syntax of regular expressions varies slightly between different programs: grep regexes aren't exactly the same as sed regexes, which aren't exactly the same as Emacs regexes, which aren't exactly the same as C++ regexes, and so on. [131007240020] |To make matters worse, even a "standard" tool like grep can vary slightly between different Unix-like operating systems. [131007240030] |In a regex, some characters have special meaning (such as the square brackets in your example), and revert to their normal meaning as literal characters when you "escape" them by putting a backslash in front of them (so a literal bracket would be written as \[). [131007240040] |Others work the other way around, and only take on special meaning when escaped (e.g. plain n is just a letter, but \n is a line feed). [131007240050] |And these, again, can vary between regex implementations. [131007240060] |In most regex implementations, a question mark means that the previous item is optional, while an escaped question mark (\?) is a literal question mark. [131007240070] |But in a few dialects, it's the other way around. [131007240080] |Your example could make sense either way around, but I suspect you have one of the dialects where ? is a literal and \? is the optional symbol. [131007240090] |So your regex probably means "three digits, optionally followed by a space or dash, followed by four digits". [131007240100] |(Another clue can be seen in constructs like \{3\}, which is clearly intended to mean "exactly 3 of the previous item". [131007240110] |In most regex dialects this would be written {3}, and \{ would be a literal brace.) [131007250010] |Are there any repositories for Fedora 3? [131007250020] |Is there any live repository for my Fedora Core 3 Linux? [131007250030] |(I want to use the yum command, but an error comes up and says that the base URL is not valid.) [131007250040] |Any help finding a live repository is very much appreciated :) [131007260010] |Yes, there is. [131007260020] |http://archive.kernel.org/fedora-archive/fedora/linux/core/3/i386/os [131007270010] |Safe to delete System.map-* files in /boot? [131007270020] |I'm experimenting with generating some custom kernels using genkernel. [131007270030] |However, each iteration leaves a file in /boot called System.map-genkernel--. [131007270040] |Is it safe to rename and/or delete the System.map-* files? [131007280010] |The System.map file is mainly used to debug kernel crashes. [131007280020] |It's not actually necessary, but it's best to keep it around if you're going to use that kernel. [131007280030] |If you've decided you don't need that kernel, then it's safe to delete the corresponding map file. [131007290010] |Non-case-sensitive sed - OpenWrt [131007290020] |The input would be like this in a file: [131007290030] |and the parsing looks like this: [131007290040] |The "/gi" is not good, because OpenWrt doesn't know it (busybox...): [131007290050] |Does anybody know how I could delete, e.g., the: [131007290060] |with one sed? [131007290070] |Plus (the real thing I need): how could I delete the whole line containing [131007300010] |I can't see a way using Busybox's sed, but you could use Busybox's sh and grep like this: [131007310010] |Does the Busybox not have grep -v? [131007310020] |This should do the job in one simple command: [131007320010] |How to make KDE3 visible for GDM? [131007320020] |I have Gnome (default), KDE4 and KDE3 installed.
[131007320030] |When I log out, GDM kicks in and I can select a desktop other than Gnome -- among the window managers there is KDE4 (as KDE), but no KDE3. [131007320040] |Question: how to make KDE3 visible to GDM, so I could select it? [131007320050] |openSUSE 11.4. [131007330010] |Found an even more general solution: [131007330020] |http://ubuntuforums.org/showthread.php?t=2920 [131007330030] |It describes how to modify any session manager to "see" any session type. [131007330040] |In my case it was sufficient to copy kde4.desktop to kde3.desktop and change the prefix path from /usr/ to /opt/kde3. [131007340010] |How to deliberately activate the cooling fan of a laptop? [131007340020] |Hi, I have a laptop (Fujitsu Siemens Amilo 4000) and I'd like to control the cooling fan manually. [131007340030] |How do I do that? /proc/acpi/fan/ is empty; the fan is otherwise working well. [131007340040] |Distro is Fedora 14. [131007350010] |The fujitsu_laptop module that handles ACPI for Fujitsu-Siemens laptops does not appear to have fan control code (as of today); see: [131007350020] |http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=drivers/platform/x86/fujitsu-laptop.c [131007350030] |(You can look at the ThinkPad ACPI code in the same directory; it has a fan subdriver.) [131007350040] |I don't think it's possible to achieve what you want to do with your hardware. [131007360010] |Internal speaker not working, Ubuntu 10.10 [131007360020] |I have a dv6000 laptop. [131007360030] |My speaker and mic were working a few days ago. [131007360040] |But now only the external speaker works. [131007360050] |Can anyone suggest some solutions, please? [131007360060] |Thanks. [131007370010] |How can I best copy large numbers of small files over scp? [131007370020] |I have a directory that's got several gigabytes and several thousand small files. [131007370030] |I want to copy it over the network with scp more than once. [131007370040] |CPU time on the source and destination machines is cheap, but the network overhead added by copying each file individually is huge. [131007370050] |I would tar/gzip it up and ship it over, but the source machine is short on disk. [131007370060] |Is there a way for me to pipe the output of tar -czf to scp? [131007370070] |If not, is there another easy solution? [131007370080] |My source machine is ancient (SunOS) so I'd rather not go installing things on it. [131007380010] |Use rsync; it uses SSH. [131007380020] |Usage: [131007380030] |The rsync switches take care of compression and inode information. -P displays the progress of every file. [131007380040] |You can use scp -C, which enables compression, but if possible, use rsync. [131007390010] |You can pipe tar across an ssh session: [131007400010] |You can run tar on both ends using ssh. scp is part of the ssh family of goodness, so you probably have it on both ends. [131007400020] |There may be a way to work gzip or bzip2 into the pipeline to lessen the network traffic, too. [131007410010] |If you have gzip on both ends: sourcehost$ cd sourcedir && tar cf - . | gzip -c - | ssh user@destinationhost "cd destinationdir && gzip -c -d | tar xf -" [131007410020] |If you don't have gzip on the source machine, make sure you have uncompress on the destination: sourcehost$ cd sourcedir && tar cf - . | compress | ssh user@destinationhost "cd destdir && uncompress | tar xf -" [131007410030] |This would be faster than first zipping it up, then sending, then unzipping, and it requires no extra disk space on either side.
[131007410040] |I skipped the compression (z) flag on tar, because you probably don't have it on the ancient side. [131007420010] |Tar with bzip2 compression should take load off the network and put it on the CPU. [131007420020] |Not using -v because screen output might slow down the process. [131007420030] |But if you want verbose output, use it on the local side of tar (-jcvf), not on the remote part. [131007420040] |If you repeatedly copy over the same destination path, like updating a backup copy, your best choice is rsync with compression. [131007420050] |Notice that both src and dest paths end with a /. [131007420060] |Again, not using the -v and -P flags on purpose; add them if you need verbose output. [131007430010] |Linux Kernel: Good beginners' tutorial [131007430020] |Hi, [131007430030] |I'm interested in modifying the kernel internals, applying patches, and handling device drivers and modules, for my own personal fun. [131007430040] |Is there a comprehensive resource for kernel hacking, intended for experienced programmers? [131007430050] |Thanks, [131007430060] |Adam [131007440010] |Linux Kernel Newbies is a great resource. [131007450010] |I suggest you read "Linux Kernel in a Nutshell", by Greg Kroah-Hartman and "Understanding the Linux Kernel", by Robert Love. [131007450020] |Must reads :) [131007460010] |Linux Device Drivers is another good resource. [131007460020] |It would give you another way to get into the inner workings. [131007460030] |From the preface: [131007460040] |This is, on the surface, a book about writing device drivers for the Linux system. [131007460050] |That is a worthy goal, of course; the flow of new hardware products is not likely to slow down anytime soon, and somebody is going to have to make all those new gadgets work with Linux. [131007460060] |But this book is also about how the Linux kernel works and how to adapt its workings to your needs or interests. [131007460070] |Linux is an open system; with this book, we hope, it is more open and accessible to a larger community of developers. [131007470010] |Linux Kernel 2.4 Internals is another online resource to look at. [131007470020] |It appears to take a pretty 'ground up' approach, starting with booting. [131007470030] |Here is the TOC: [131007470040] |
  • Booting [131007470050] |
  • 1.1 Building the Linux Kernel Image
  • [131007470060] |
  • 1.2 Booting: Overview
  • [131007470070] |
  • 1.3 Booting: BIOS POST
  • [131007470080] |
  • 1.4 Booting: bootsector and setup
  • [131007470090] |
  • 1.5 Using LILO as a bootloader
  • [131007470100] |
  • 1.6 High level initialisation
  • [131007470110] |
  • 1.7 SMP Bootup on x86
  • [131007470120] |
  • 1.8 Freeing initialisation data and code
  • [131007470130] |
  • 1.9 Processing kernel command line
  • [131007470140] |
  • Process and Interrupt Management [131007470150] |
  • 2.1 Task Structure and Process Table
  • [131007470160] |
  • 2.2 Creation and termination of tasks and kernel threads
  • [131007470170] |
  • 2.3 Linux Scheduler
  • [131007470180] |
  • 2.4 Linux linked list implementation
  • [131007470190] |
  • 2.5 Wait Queues
  • [131007470200] |
  • 2.6 Kernel Timers
  • [131007470210] |
  • 2.7 Bottom Halves
  • [131007470220] |
  • 2.8 Task Queues
  • [131007470230] |
  • 2.9 Tasklets
  • [131007470240] |
  • 2.10 Softirqs
  • [131007470250] |
  • 2.11 How System Calls Are Implemented on i386 Architecture?
  • [131007470260] |
  • 2.12 Atomic Operations
  • [131007470270] |
  • 2.13 Spinlocks, Read-write Spinlocks and Big-Reader Spinlocks
  • [131007470280] |
  • 2.14 Semaphores and read/write Semaphores
  • [131007470290] |
  • 2.15 Kernel Support for Loading Modules
  • [131007470300] |
  • Virtual Filesystem (VFS) [131007470310] |
  • 3.1 Inode Caches and Interaction with Dcache
  • [131007470320] |
  • 3.2 Filesystem Registration/Unregistration
  • [131007470330] |
  • 3.3 File Descriptor Management
  • [131007470340] |
  • 3.4 File Structure Management
  • [131007470350] |
  • 3.5 Superblock and Mountpoint Management
  • [131007470360] |
  • 3.6 Example Virtual Filesystem: pipefs
  • [131007470370] |
  • 3.7 Example Disk Filesystem: BFS
  • [131007470380] |
  • 3.8 Execution Domains and Binary Formats
  • [131007470390] |
  • Linux Page Cache
  • [131007470400] |
  • IPC mechanisms [131007470410] |
  • 5.1 Semaphores
  • [131007470420] |
  • 5.2 Message queues
  • [131007470430] |
  • 5.3 Shared Memory
  • [131007470440] |
  • 5.4 Linux IPC Primitives
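To give a concrete taste of the module-handling side of kernel hacking mentioned in the question, here is a minimal sketch of the usual out-of-tree build-and-load cycle (it assumes a module source tree with its own kbuild Makefile in the current directory; the module name hello is a placeholder):

    # Build against the headers of the running kernel
    make -C /lib/modules/$(uname -r)/build M=$(pwd) modules

    # Insert the module, check what it logged, then remove it
    insmod ./hello.ko
    dmesg | tail
    rmmod hello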
  • [131007470450] |And, to make it even sweeter, there is a new Linux Kernel Development Third Edition by Robert Love out, and Slashdot has a review. [131007480010] |See The Linux Documentation Project. [131007480020] |Particularly the "Linux Kernel module guide". [131007490010] |cshrc execute bashrc within itself? [131007490020] |My school has our Linux accounts using csh/tcsh by default. [131007490030] |I, however, have a lot set up in my home bashrc and I'd like to use that at school. [131007490040] |BUT there's also some important stuff that happens in our cshrc, so I'd sort of like to not change my shell on each login. [131007490050] |Is there a way for me to call or execute my bashrc within my cshrc and get the same effects, or should I find out how to translate my bashrc into cshrc? [131007490060] |I don't know how crazy of an idea this is - I'm only really used to bashrc personally. [131007490070] |Thanks for any help! [131007490080] |Edit: I've decided to translate my cshrc into a bashrc so I can use bash... [131007490090] |Ick, Csh - anyone have input on translating this? [131007490100] |My cshrc I'm looking to work on probably later today: [131007500010] |The .cshrc and .bashrc files are written in the language of the shell itself, and the two languages are not compatible. [131007500020] |Further, the things you typically put into these files are commands to affect the startup behavior of the shell, so running one shell from the other will only help to a limited degree. [131007500030] |You're going to have to translate one of the files to the other syntax if you want features from both the site .cshrc and your home .bashrc. [131007500040] |If you'd rather convert the site .cshrc to work under Bash than the reverse, you can switch your shell permanently on that machine with this command: [131007500050] |The other option is to translate your home .bashrc to C shell syntax and add it to the .cshrc file. [131007500060] |I wouldn't recommend this since Csh Programming [is] Considered Harmful. :) [131007510010] |How about appending exec bash at the end of your .cshrc? [131007510020] |Beware, though, that this is not entirely risk-free, so you might want to do it in one window/session while testing the results in another, so you have a chance of reverting it. [131007510030] |(Or have a site admin nearby.) [131007520010] |Most of that .cshrc is including external files (the source command) that you'll have to translate as well. [131007520020] |The if ( $?prompt ) section is executed only in interactive shells; you don't have to worry about that in bash. [131007520030] |Some of the set commands are setting shell options that don't have exact equivalents; you may want to tune bash completion settings. [131007520040] |The few lines that matter are: [131007520050] |There's no reason why you would change your ~/.cshrc, but you may want to change your ~/.login so that text mode logins drop you into bash, or even zsh if it's available. [131007520060] |Use this at the end of ~/.login: [131007530010] |wget not able to log into FTP [131007530020] |I am trying to use wget for downloading files from an FTP repository. [131007530030] |The FTP repository has a login and password. [131007530040] |I have to go through a proxy server which does not have a login and password.
[131007530050] |When I try to use the normal, [131007530060] |wget -r -c ftp://login:pass@download.site.co.in/ [131007530070] |I find that it tries to log in to the proxy server itself rather than the FTP server, causing it to hang at that stage. Using --ftp-user and --ftp-password has the same effect. [131007530080] |How can I get around this? [131007540010] |How can I reach the "Clear Recent Documents" under Fedora via the command line? [131007540020] |What command do I have to type in a terminal to get the same results as if I clicked on the "Clear Recent Documents" menu? [131007540030] |(Fedora 14/GNOME) [131007550010] |In Ubuntu Gnome, the recent documents are stored in ~/.recently-used.xbel. [131007550020] |You can rm that file, but Gnome Panel will still show the list. [131007550030] |There's got to be a better way to get the panel to refresh, but killall gnome-panel will work. [131007560010] |SSH Copy Limit? [131007560020] |So I have a small little Ubuntu server set up, with some rather large files that I've allowed a few people to copy from their PuTTY window. [131007560030] |Anyway, two questions: Let's say a person on Windows wants to copy a file using PuTTY from an Ubuntu server. How would they do it, and what directory would it default to? (What would the command even be? Would I use scp?) [131007560040] |Second question: is there a limit? These are mostly 1-2GB files. Would that cause any issues? [131007560050] |Thanks [131007570010] |You can try PSCP which comes as part of the PuTTY distribution. [131007570020] |The usage of pscp is: [131007570030] |For example, from a Windows cmd prompt, type the following command to transfer a file to your C: drive. [131007570040] |I don't believe there is a file size limit. [131007580010] |The default will be the home directory of the Linux server account being used. [131007580020] |So if a person used luser@yourserver:foobar.pdf, then the file would be taken from/placed at ~luser/foobar.pdf on yourserver. [131007580030] |If you do not want people to be able to write to your server, then I suggest locking down the directories/files and/or setting up a callback script inside the .ssh/authorized_keys file. [131007590010] |How to get only files created after a date with ls? [131007590020] |With the ls command, is it possible to show only files created after a date, hour... [131007590030] |I'm asking because I have a directory with thousands of files. [131007590040] |I want files created since yesterday. [131007590050] |I use ls -ltr but I have to see all files... [131007590060] |Is there an equivalent of DIRECTORY/SINCE=date from OpenVMS? [131007600010] |You can use the find command to find all files that have been modified after a certain number of days. [131007600020] |For example, to find all files in the current directory that have been modified since yesterday (24 hours ago) use: [131007610010] |rsync Permissions Problem [Ubuntu 10.10, SME Server 7.4] [131007610020] |I am trying to cron an rsync via ssh between two fileservers that are running SME Server 7.4 and Ubuntu 10.10, respectively. [131007610030] |The first rsync worked just fine (for reasons that I do not know), but now, well... here's the output: [131007610040] |(Note: Trust me, I know that TRWTF is the terrible and horrible way that the directory is organized. [131007610050] |That's another project for another day.) [131007610060] |Neither account is root, and I don't want to have to make the accounts root for this to work.
[131007610070] |i@i-drive rsyncs just fine to my OS X fileserver, and from the same folder, even. [131007610080] |The account on the OS X box isn't root, either. [131007610090] |Thanks in advance for any help. [131007620010] |I'm pretty sure I got it sorted... [131007620020] |I had to chmod -R 777 the folder on fm-backup, and change the rsync options from -avz to -rvz. [131007620030] |The only thing the Ubuntu box is even used for is to remotely back up files, so I don't see that as a problem. [131007630010] |Program to keep track of the number of login attempts [131007630020] |Is there any way to keep track of the number of attempts made by a user to log into his system? [131007640010] |On Linux, logins and failed logins are logged in binary format in /var/log/wtmp and /var/log/btmp respectively. [131007640020] |In order to view those logs in human-readable format, you need to use the command last or lastb. [131007640030] |You can also check your /var/log/auth.log (which is plain text) for successful / failed authentication attempts. [131007640040] |In OpenBSD there is no /var/log/btmp, but the last command works. [131007640050] |Also, the authlog is in /var/log/authlog. [131007640060] |In Solaris the last command works, but (at least on the system I have access to) authlog seems to be empty. [131007650010] |The Right Distro for text-based needs [131007650020] |I'm searching for the right Linux distro. [131007650030] |My four current ideas are: [131007650040] |
  • Gentoo
  • [131007650050] |
  • Grml
  • [131007650060] |
  • Arch
  • [131007650070] |
  • Debian
  • [131007650080] |But I'm absolutely open to more alternatives. [131007650090] |Now a little bit more about my needs: [131007650100] |
  • I want it to be really small by default, so I can customize the hell out of it.
  • [131007650110] |
  • I want to use text tools only; I don't need any graphics on that OS
  • [131007650120] |
  • I'm going to use ZSH, Vim, the NEO Layout and maybe XMonad
  • [131007650130] |
  • I want to have a really nice package manager
  • [131007650140] |
  • The OS will mainly be used for programming
  • [131007650150] |My CPU is an Intel Core 2 Duo, 64 Bit, of course. [131007650160] |My questions are now: [131007650170] |
  • Which package manager is the most advanced: APT, Portage or Pacman?
  • [131007650180] |
  • Which distro fits my needs best?
  • [131007650190] |
  • What is the easiest way to run it from a USB flash drive?
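Regarding that last point, one common approach is simply writing a distribution's ISO image to the stick with dd; this is only a sketch (the image name and /dev/sdX are placeholders, and it assumes the distro ships a hybrid ISO that can boot from USB):

    # Double-check the device name first: dd will happily overwrite the wrong disk
    dd if=distro.iso of=/dev/sdX bs=4M
    sync

For images that are not hybrid, a helper such as UNetbootin is the usual alternative.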
  • [131007660010] |Your requirement 1 says: [131007660020] |I want it to be really small by default, so I can customize the hell out of it. [131007660030] |Then, you DON'T want your package manager to be that advanced. [131007660040] |Anyway, AFAIK portage is better for the "customization" thing, but maybe you want to read more about it because I've never used it. [131007660050] |APT is really cool, and I'm a Debian user, but I don't know how simple you want your system. [131007660060] |Pacman is really good, and I used Arch for a year. [131007660070] |Arch's system simplicity and customization are pretty, and its BSD-like feeling is really different. [131007660080] |The only thing about pacman is that it's not as intuitive as APT, but as a workaround, you have this Pacman Rosetta. [131007660090] |BTW: Maybe you want a Linux system, but if you don't care, you could try FreeBSD; it'll be nice for you. [131007660100] |Cheers [131007670010] |I've used Debian, Gentoo and Arch for a couple of years each. [131007670020] |The most customizable by far is Gentoo. [131007670030] |But it takes thought each time you want a given package. [131007670040] |Debian is, well, Debian: a mainstream distro that can feel bloated to some. [131007670050] |Given your requirements, I think you might like Arch. [131007670060] |It's pretty lightweight and there are tons of bleeding-edge packages. [131007680010] |/proc/PID/fd/X link number [131007680020] |In Linux, in /proc/PID/fd/X, the links for file descriptors that are pipes or sockets have a number, like: [131007680030] |Like on the first line: 6839. [131007680040] |What does that number represent? [131007690010] |That's the inode number for the pipe or socket in question. [131007690020] |A pipe is a unidirectional channel, with a write end and a read end. [131007690030] |In your example, it looks like FD 5 and FD 6 are talking to each other, since the inode numbers are the same. [131007690040] |(Maybe not, though. [131007690050] |See below.) [131007690060] |More common than seeing a program talking to itself over a pipe is a pair of separate programs talking to each other, typically because you set up a pipe between them with a shell: [131007690070] |Then in another terminal window: [131007690080] |This says that PID 4242's standard output (FD 1, by convention) is connected to a pipe with inode number 222536390, and that PID 4243's standard input (FD 0) is connected to the same pipe. [131007690090] |All of which is a long way of saying that ls's output is being sent to less's input. [131007690100] |Getting back to your example, FD 1 and FD 2 are almost certainly not talking to each other. [131007690110] |Most likely this is the result of tying stdout (FD 1) and stderr (FD 2) together, so they both go to the same destination. [131007690120] |You can do that with a Bourne shell like this: [131007690130] |So, if you poked around in /proc/$PID_OF_SOME_OTHER_PROGRAM/fd, you'd find a third FD attached to a pipe with the same inode number as is attached to FDs 1 and 2 for the some-program instance. [131007690140] |This may also be what's happening with FDs 5 and 6 in your example, but I have no ready theory how these two FDs got tied together. [131007690150] |You'd have to know what the program is doing internally to figure that out. [131007700010] |It is possible to use colors on the Unix command-line interface (CLI). [131007700020] |Commands like less, ls, vim and grep support colors. [131007700030] |The Bash command prompt can also support colors.
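As a small illustration of that last point, a colored Bash prompt is just ANSI escape sequences embedded in PS1; a minimal sketch (the color choices are arbitrary):

    # Green user@host, blue working directory; \[ \] keep prompt-length accounting correct
    PS1='\[\e[32m\]\u@\h\[\e[0m\]:\[\e[34m\]\w\[\e[0m\]\$ '

    # Colored output for ls and grep (GNU versions)
    alias ls='ls --color=auto'
    alias grep='grep --color=auto'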
[131007710010] |It is possible to use colors on the Unix command-line interface [131007720010] |SVN changelist all my checkouts [131007720020] |Let's say I have 100+ files that I've checked out. [131007720030] |Is there a way I can add all of them to a changelist without specifying them one by one, or adding them to a file? [131007730010] |You are not really clear about what you want: you say that you checked 100+ files out, but you want to add them to the repository. [131007730020] |I'll go over both adding files already in a workspace and committing modifications in a workspace. [131007730030] |To add a directory tree to an already checked out working copy, use svn add --force . which will add all the unversioned files in your working copy starting at the current directory (even though the current directory is versioned). [131007730040] |The added files will still need to be committed to the repository. [131007730050] |To commit modifications, you can run svn commit -m "insert comment here" . which will commit all the changes in the working copy from the current directory down to the repository. [131007740010] |What operating system are you using? [131007740020] |I'm going to assume Linux. [131007740030] |Try something like this: [131007740040] |This will add all C and header files in the current and immediate subdirectories to the changelist source. [131007740050] |The above command may have trouble with whitespace in filenames. [131007740060] |For something more sophisticated, try this on for size: [131007740070] |This will add all normal files not located under .svn directories to the changelist source. [131007740080] |This also works with whitespace in filenames. [131007750010] |How do I set up a BitTornado tracker? [131007750020] |As the title suggests, I would like to set up a BitTornado tracker for a local network. [131007750030] |At the moment I am using OpenBitTorrent as my tracker, but this seems strange as I don't want to do this over the net and I'm not sure how secure or legal it is. [131007750040] |The "server" is running Bodhi Linux. [131007750050] |I found a how-to on TuxCoder using this command to start the tracker [131007750060] |However I get back a deprecation warning: [131007750070] |It may be that there is a better tracker out there. [131007750080] |I'm not an expert, which is why I'm asking this question. [131007760010] |It's simply a warning; you can just ignore it. [131007760020] |There is a command line flag to python to make it ignore that class of warnings: -W ignore::DeprecationWarning (it's ignored by default starting at version 2.7) [131007760030] |Personally I'm still using bttrack even though it's getting old and does not support the latest improvements to the BitTorrent protocol: it's efficient, easy to use and reliable. [131007760040] |I've got a couple of trackers and seeders up and running to support a Linux distribution. [131007770010] |How to manage a huge amount of files in the shell? [131007770020] |$ ls ./dir_with_huge_amount_of_files/errors/ [131007770030] |Suppose a dir is full of pictures with Unix timestamps; I mean a lot, measured in many GBs or even more. [131007770040] |Shell commands like ls will get overflow-style warnings because they are not designed to work with millions (or more) of pictures. [131007770050] |How can I manage such a huge amount of files? [131007770060] |If I for example want to find the picture in the middle (according to timestamp in the name and creation time), is there some fs that offers a built-in search feature?
[131007770070] |Which commands would you use? [131007770080] |I tried the comfortable ls and find with the necessary flags, but they were either very slow or gave warnings, so I am thinking that I need either a better fs or a db or something like that to pre-index the pictures. [131007770090] |I basically need one array into which the inodes of the photos would be placed in chronological order. [131007770100] |How to do that? [131007770110] |Later, metadata with Unix timestamps could be added. [131007780010] |Would locate (and of course updatedb) be of any help to you? [131007790010] |Try a different shell. [131007790020] |I'd recommend trying zsh for instance, and see if it allows more parameters. [131007790030] |If I understand correctly, part of the filename is a UNIX timestamp. [131007790040] |It might be advisable to divide the files into folders. [131007790050] |If the date/time format is a UNIX epoch number, put chunks of fractions of that number, say 10000's, in a separate folder. [131007790060] |If an ISO 8601 timestamp is part of the filename, simply divide by year, month or day. [131007800010] |Small Distributed Computing Cluster [131007800020] |I'm a high school student trying to build a Linux cluster for a project (I have a bunch of decent computers slated for re-image this summer, so the tech department basically says as long as I don't physically break them I can do whatever.) [131007800030] |Anyway, I don't really know anything about building a cluster, but I'm pretty good with Linux. [131007800040] |I need to know these things: What distro should I use? [131007800050] |Does it even matter? What software can configure the cluster? On-board or distributed FS? Any sites that can offer decent guides or how-tos? [131007810010] |Try Linux HA (High Availability); it is a freely available Linux cluster solution that works on several distributions. [131007810020] |It's probably only one of several solutions. [131007810030] |I don't know how it compares with others, or even what its specific features are; I just know that some workmates swore by it for serious commercial software. [131007820010] |It really depends on what you are trying to accomplish, and what you mean by "Distributed Computing Cluster." [131007820020] |I did a similar thing once in Uni using old machines and PVM; that's the "Cluster" in the sense of a bunch of machines acting as one single computer to do parallel processing - think Beowulf clusters. [131007820030] |Of course, you will need code that is written to take advantage of this. [131007820040] |A good place to start would be determining what you are looking to learn with this project. [131007820050] |I recommend reading the Wikipedia article on Parallel Computing for starters, and then refining your needs based on what you want to do. [131007820060] |A simple job queuing system (like gearman) may be enough to get some cool results quickly. [131007820070] |The problem I had when I made a parallel computing cluster was that I didn't have anything to do on it; it just basically sat there, but it was a fun project and I learned quite a bit. [131007820080] |In any case, you are likely to learn quite a bit and have fun at the same time, regardless of what you choose to implement. [131007820090] |As far as choice of distributions, I would go with what I was most comfortable with, as you will likely need to install things from source.
[131007820100] |Once you are comfortable getting everything set up, then you can look into finding a distribution that is tailored more towards your needs. [131007820110] |But any distribution should do. [131007820120] |What software to configure the cluster? [131007820130] |This depends entirely on what type of cluster you create. [131007820140] |On-board vs. distributed FS? [131007820150] |Again, this depends on what the requirements for your cluster are. [131007820160] |Will each node be passing data back and forth among the other nodes? [131007820170] |Will they operate as slaves with a single master? Will they operate completely independently? [131007820180] |These questions will start to inform your choices. [131007820190] |And of course, there are always trade-offs. [131007820200] |Some other links that might prove interesting: [131007820210] |http://hadoop.apache.org/ [131007820220] |http://www.csm.ornl.gov/oscar/ [131007820230] |https://computing.llnl.gov/tutorials/parallel_comp/ [131007820240] |http://www.google.com/Top/Computers/Parallel_Computing/Programming/Environments/ [131007820250] |http://www.google.com/Top/Computers/Parallel_Computing/Beowulf/ [131007830010] |No sound when attached to docking station [131007830020] |Hello guys, [131007830030] |I have been using Linux and Ubuntu only for the last two months on my (2006) Sony Vaio VGN A617S laptop, which is going pretty well on Ubuntu 10.10. [131007830040] |When installing Ubuntu, audio got installed alright, and the speakers on the laptop itself are working well even now with the right driver (snd-hda-intel) for the Realtek ALC260 soundcard. [131007830050] |My problem is when the laptop is docked to the A/V docking station (I have seen it referred to as a port replicator as well - the model number is VGP-PRAV2), with its two external speakers. [131007830060] |And these were not on when Ubuntu was being installed. [131007830070] |What is interesting is that the headphone jack on the laptop does work when the docking station does not. [131007830080] |I have been trying hard and have already tried quite a few things I saw in related issues, like: a) adjustments to some IEC958 in alsamixer b) using a custom mod for the ALC260 card [131007830090] |All this is to no avail. [131007830100] |If someone could help me with this, that would be really appreciated. [131007830110] |Thank you. [131007840010] |I have similar issues in Windows, while using HP laptops and their docking stations. [131007840020] |These kinds of issues can be solved by connecting the laptop to the dock before starting the OS. [131007850010] |How can I enable internet sharing without using the GUI or AppleScript on Snow Leopard? [131007850020] |For some rather strange reason my sharing preferences tab crashes (it's a long story and there seems to be no good solution for it; it's looking for a UI object that no longer exists). [131007850030] |Anyway, I want to enable internet sharing to share my MacBook's internet connection with my iPad, but I can't find a way to do it without the GUI or AppleScript (which basically calls the GUI). [131007860010] |Add new user in Solaris [131007860020] |I tried to add a new user in Solaris 10, but got an error: [131007860030] |I ran pwconv, but nothing was displayed. [131007860040] |I also tried to sync the shadow & passwd files, but there's still an error: [131007870010] |Check the integrity of /etc/passwd and related files with pwck. [131007870020] |Do the same for group-related files with grpck.
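A sketch of that check-and-retry sequence (the user name is a placeholder; run as root on Solaris 10):

    # Verify the consistency of passwd/shadow, then the group files
    pwck
    grpck

    # Rebuild /etc/shadow from /etc/passwd once the problems are fixed
    pwconv

    # Try the failing command again
    useradd -d /export/home/newuser -m newuser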
[131007880010] |I am failing to 'git clone clutter' [131007880020] |While trying to build clutter from JHBuild, I get stuck at the stage where I have to clone it: [131007880030] |It just chills there, and it seems nothing is happening at all. [131007880040] |When I check the bandwidth usage, there's hardly anything. [131007880050] |Update 1: Well, I wonder what happened, but it works now. [131007880060] |Probably slow network on my (or their) side. [131007880070] |Update 2: One of the JHBuild developers (on IRC): [131007880080] |git.clutter-project.org is unreliable; it will soon be moved to git.gnome.org, afaik. [131007890010] |Where can I see a list of kernel-killed processes? [131007890020] |Is there some way I can check which of my processes the kernel has killed? [131007890030] |Sometimes I log onto my server and find that something that should've run all night just stopped 8 hours in, and I'm unsure if it's the application's doing or the kernel's. [131007900010] |If the kernel killed a process (because the system ran out of memory), there will be a kernel log message. [131007900020] |Check in /var/log/kern.log (on Debian/Ubuntu; other distributions might send kernel logs to a different file, but usually under /var/log under Linux). [131007900030] |Note that if the OOM-killer (out-of-memory killer) triggered, it means you don't have enough virtual memory. [131007900040] |Add more swap (or perhaps more RAM). [131007900050] |Some process crashes are recorded in kernel logs as well (e.g. segmentation faults). [131007900060] |If the processes were started from cron, you should have a mail with error messages. [131007900070] |If the processes were started from a shell in a terminal, check the errors in that terminal. [131007900080] |Run the process in screen to see the terminal again in the morning. [131007900090] |This might not help if the OOM-killer triggered, because it might have killed the cron or screen process as well; but if you ran into the OOM-killer, that's the problem you need to fix. [131007910010] |Process Accounting could help here. [131007910020] |In brief: [131007910030] |Then try commands like: [131007910040] |or on Ubuntu: [131007910050] |See: [131007910060] |
  • http://tldp.org/HOWTO/Process-Accounting/pasetup.html
  • [131007910070] |
  • http://tldp.org/HOWTO/Process-Accounting/misccommands.html
  • [131007910080] |UPDATE [131007910090] |Strangely, the pacct file has information about exit status, but neither lastcomm nor sa seem to print it. [131007910100] |So as far as I can see, you'd have to write your own C program to access the information. [131007910110] |UPDATE 2 [131007910120] |Here's a version that prints the exit code. [131007910130] |The last two fields are "S" for signaled and "E" for exited, followed by the signal number or exit status. [131007910140] |So in your case, you're probably looking for "S 15" meaning it got a SIGTERM. [131007910150] |Compared to "E 0" which means the process exited without an error. [131007910160] |Only minimally tested. [131007910170] |
  • http://mikelward.com/software/lastcomm.exitcode.patch
  • [131007910180] |
  • http://mikelward.com/software/lastcomm
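The commands elided in that answer would look roughly like this (a sketch; the package name and accounting file path are the Debian/Ubuntu defaults and may differ elsewhere):

    # Debian/Ubuntu: the tools live in the acct package
    apt-get install acct

    # Turn accounting on (the log file must already exist)
    accton /var/log/account/pacct

    # List recently finished processes, or summarize them per command
    lastcomm
    sa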
  • [131007920010] |sudo service --status-all [131007920020] |This command will tell you which services are currently running and which are not started or are stopped. [131007930010] |Open source firmware for e-ink book reader [131007930020] |Is there any open source version of firmware for ebook readers, or maybe a project like RockBox? [131007930030] |AFAIK most e-ink readers run Linux, so maybe there exists at least one model of e-reader with an open specification and/or drivers for which you can build your own custom Linux? [131007940010] |Yes, there exists some information. [131007940020] |The driver for the e-ink display is broadsheetfb. [131007940030] |However, attempting to do it is probably non-trivial, since this is not something that is commonly done. [131007940040] |That said, good luck, and if you succeed in this, please post details! [131007950010] |How to start a new Linux distro? [131007950020] |Some friends and I are interested in starting a new Linux distro. [131007950030] |How to do that? [131007950040] |What do we need to plan? [131007950050] |

    Backstory

    [131007950060] |I represent a community of Linux sysadmins/implementors whose special needs include, among others: [131007950070] |
  • A specific 'lean' kernel config
  • [131007950080] |
  • Package management that fits our 'field needs'
  • [131007950090] |
  • Binary packages optimized for our 'use cases'
  • [131007950100] |
  • X-less system
  • [131007950110] |To the point: We have needs for a specially-configured, production-quality Linux to be run exclusively as Para-Virtualized Production Servers. [131007950120] |Rather than jumping through all the hoops and loops every time we need a VM-ized Server, we would very much like a semi-prepared system, optimized for its environment. [131007950130] |Since these VMs would be Production Servers, stability is a must, and honestly the available package management systems we're currently aware of just do not provide assurance. [131007950140] |Zypp and Conary are the closest ones to our needs, but again still miss on some points. [131007960010] |If you just want some set of default applications, you can customize an existing distro like Ubuntu using some simple tools. http://maketecheasier.com/reconstructor-creating-your-own-ubuntu-distribution/2008/07/05 [131007970010] |You might want to look at Linux From Scratch: [131007970020] |Linux From Scratch (LFS) is a project that provides you with step-by-step instructions for building your own customized Linux system entirely from source. [131007980010] |You didn't really specify what you want from the package manager. [131007980020] |But openSUSE provides a build service where you can easily customize any package (including the kernel) and even create a whole distribution. [131007980030] |http://en.opensuse.org/Portal:KIWI [131007980040] |https://build.opensuse.org [131007990010] |You will need a minimal running system, likely from another distro, to "bootstrap" your own distro with enough to at least get gcc or another C compiler running. [131007990020] |You then need to start by deciding what core libraries (including libc) and software comprise the base, "no-packages-installed" state of your system. [131007990030] |Then, get the source to these libraries and software and compile them, make sure all the software can find the libraries it needs, and start creating your low-level base environment. [131007990040] |Basically your bootstrap environment will be nothing more than a running kernel and the absolute minimum you need to get a basic shell, a C compiler, and basic things like rm, cp, tar and stuff like that working. [131007990050] |The next thing you should get up and running after that is Perl. [131007990060] |Once you have your base system created, you need to persist it and create some boot scripts that take the system from initial boot to a usable shell with a compiler. [131007990070] |Then you need to design/write a package system and format, download the source code to the software you want to package, compile and package it, and design a robust distribution system for your packages. [131007990080] |None of this is trivial. [131007990090] |Good luck. [131008000010] |How to run commands in batch mode over ssh? [131008000020] |How can I run commands in batch mode over ssh? [131008000030] |That is, what is the ssh command's equivalent of sftp -b? [131008000040] |I have a set of commands which I wish to run across a set of hosts, connecting over ssh. [131008000050] |Over sftp, I store the commands in a file filename and connect to the host and run the commands using the previously mentioned command. [131008000060] |Is something like that possible over ssh? [131008010010] |man expect? :\ [131008010020] |But it's not the perfect way. [131008020010] |Correct me if I'm wrong, but you seem to be wanting to run regular shell commands on the remote server where the script is local.
[131008020020] |I do this with some 'remote execution' apps in my test environment using Python instead of the shell: ssh $userhost python < $pythonscriptfilename. [131008030010] |perhaps [131008040010] |You could use ssh forced commands. [131008040020] |These are associated with a particular key. [131008040030] |When an authentication is done with that key, that command is run and the connection exits. [131008040040] |One advantage of this approach is increased security, since in that case the key can't be used to get to a login shell. [131008050010] |How about keeping it simple and running the "batch" file on the other computer? [131008050020] |
  • scp batch-file user@pc
  • [131008050030] |
  • ssh user@pc batch-file
  • [131008050040] |
  • ssh user@pc rm batch-file
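Those three steps can also be collapsed into a single pipeline, so nothing is left behind on the server (a sketch, assuming the remote login shell is sh-compatible):

    # Feed the local batch file to a shell spawned on the remote host
    ssh user@pc sh -s < batch-file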
  • [131008050050] |And the batch file would be a normal shell script, so the syntax is well known. [131008060010] |The SSH equivalent of sftp -b would be: [131008060020] |ssh -o BatchMode=yes user@pc sh -s < filename [131008070010] |OpenBSD patch system [131008070020] |If I install OpenBSD from CD-ROM: http://www.openbsd.org/ftp.html with install48.iso, then is it patched?
  • Are all 10 patches from here included in the ISO file?
  • [131008070040] |
  • If those are not included, how can I apply these patches? [131008070050] |Is there a one-liner command (like yum upgrade under Fedora, or apt-get upgrade on Debian-based systems), or do I have to download and apply all 10 patches one by one?
  • [131008080010] |Currently OpenBSD 4.8 is the latest released version, so: [131008080020] |ftp://ftp.openbsd.org/pub/OpenBSD/4.8/amd64/ install48.iso - 225 MByte [131008080030] |ftp://piotrkosoft.net/pub/OpenBSD/snapshots/amd64/ install48.iso - 229 MByte [131008080040] |So does this mean the [131008080050] |ftp://piotrkosoft.net/pub/OpenBSD/snapshots/amd64/install48.iso [131008080060] |is the original 4.8 release + all the 10 patches? [131008080070] |[it's a little bigger than the original one, that's why I suppose this] [131008080080] |If someone confirms it's true, that there is an install48.iso that contains all the patches [in the snapshot directory], then it's answered!! :) [131008090010] |The canonical reference for this is The OpenBSD FAQ - 5.1 [131008090020] |The install48.iso in the 4.8 directory is the 4.8 before patches. [131008090030] |So, if you want the patches, you need to install 4.8 and then patch your system yourself. [131008090040] |The install48.iso in the snapshots directory is more than just the patches to the OS listed on the errata page; it's also everything new that is being developed as the system moves towards 4.9. [131008090050] |Snapshots are just that: "snapshots" of the code as it's moving towards the next release. [131008090060] |So, to answer your question, no. [131008090070] |If you install using the install48.iso CD, you will not have a patched system; you will need to apply the patches yourself. [131008090080] |For information on applying these patches, see each individual patch. [131008090090] |You may also choose to follow the "stable" branch of OpenBSD, the reference being OpenBSD - Following stable, which includes these patches already. [131008090100] |In either case, you will have to have a checkout of the OpenBSD source. [131008090110] |There is no one-liner or automated way to apply these patches. [131008100010] |Debian 6.0 and Xen PyGrub failure [131008100020] |On my VPS (running Debian 6.0 on Xen with PyGrub) I get the following error when trying to upgrade the system: [131008100030] |I googled and found this solution: [131008100040] |Apparently that only works on older systems and not on my server. [131008100050] |Any ideas? [131008110010] |How do you change the root password on Debian? [131008110020] |I want to change the password I assigned to root on my Debian webserver to something longer and more secure. [131008110030] |Bit of an obvious newbie question, but how do I do that? [131008110040] |I haven’t forgotten/lost the current password, I just want to change it. [131008120010] |Ah, use the passwd program as root: [131008120020] |Or, if you’re running as root already (which you shouldn’t be), just: [131008120030] |The root argument can be omitted, because when you execute passwd it defaults to the current user (which is root, as only root can change the root password). [131008130010] |If you're going to be doing a lot of command-line administration, you might find it useful to check out the man pages for usermod(8), chfn(1), chsh(1), passwd(1), crypt(3), gpasswd(8), groupadd(8), [131008140010] |What params do I pass to grep to return only file names? [131008140020] |I'm trying to use grep to find a specific piece of text in a bunch of files on my web server. [131008140030] |No problem, except that it returns way more information than I want! [131008140040] |Ideally it would just return a list of files, and if the text exists in more than one place in the file it would only list the file name once.
[131008140050] |Currently I'm using something like this: [131008140060] |to do a case-insensitive recursive search for the word essay_ in all directories of my site. [131008140070] |What it returns is something like this: [131008140080] |What I'd like to get back is: [131008150010] |From the grep manpage: [131008160010] |You want the -l (el) or --files-with-matches command-line switch(es): [131008160020] |Suppress normal output; instead print the name of each input file from which output would normally have been printed. [131008160030] |The scanning will stop on the first match. (-l is specified by POSIX.) [131008170010] |Best way to archive attachments? [131008170020] |My saved-messages and sent-mail "folders" (actually Unix MBX files) are huge because of attachments, most of which I've saved to disk anyway. [131008170030] |I want to keep the messages, but replace the attachment w/ a text file saying "Attachment removed: /full/path/to/attach.txt". [131008170040] |How do I do this? [131008170050] |I'm using Alpine, but any tool that does this for Unix MBX is fine. [131008170060] |Alpine does let me delete attachments from emails, but I can't replace them w/ a text file. [131008170070] |Notes: [131008170080] |
  • I realize I can save the message to a file and edit the file using emacs, but that's kludgey and probably messes up "Content-Length" headers and stuff.
  • [131008170090] |
  • I also realize I can forward the message, with headers, to myself after removing the attachment. [131008170100] |Again, kludgey.
  • [131008170110] |
  • I don't think Alpine lets me add attachments to stored mail (unless I want to send it somewhere [which messes up headers]), so I can't delete the big attachment and add a smaller one.
  • [131008170120] |
  • I realize I could write a Perl script to do this, but hoping for an existing well-tested solution.
  • [131008180010] |OK, I poked around, and when Alpine "deletes" an attachment, it actually replaces it with something like: [131008180020] |I can then use emacs to edit this message (and it doesn't mess up any Content-Length headers or anything). [131008190010] |I use Thunderbird/Icedove with the AttachmentExtractor add-on for this. [131008200010] |Differences in package management between Debian and Arch [131008200020] |A discussion from this post made me curious about the differences between Debian and Arch package management. [131008200030] |I'm not talking about usage, but more about the handling of dependencies. [131008200040] |Also, people tend to say that Arch is very light-weight, so I wonder what that has to do with package management. [131008200050] |Is it maybe because Debian treats Recommends as hard dependencies by default? [131008200060] |Can you also mention the flexibility/power between the two package managers: which of the two lets you do more? [131008200070] |I'm aware that some features available on a Debian package management system would be irrelevant on an Arch system, since Arch has a single suite and Debian has multiple (e.g. APT pinning and advanced dependency handling come to mind), so please compare features that are applicable to both systems (i.e. assume that for Debian, I use only Unstable). [131008210010] |I have just been using Arch regularly for a few weeks and am no expert on the subject, so this answer is by no means exhaustive; these are just a few points I have noted about the "flexibility/power": [131008210020] |
  • This is just an impression, but pacman seems more modern and simpler in its design/architecture. [131008210030] |At least there are far fewer tools to deal with. [131008210040] |While I don't know the apt source code, I just happened to look at the libalpm code (the underlying library of pacman) to make a very simple patch, and it seems clean and easy to understand.
  • [131008210050] |It is also very fast (due to optimization, and probably also because it cares about fewer things; see below). [131008210060] |The last release (pacman 3.5, a few days old) tried to improve performance by reducing the number of involved database files.
  • [131008210070] |While Arch is oriented towards the use of binary packages, it also has advantages when building packages from source, with a build system similar to BSD's ports (ABS).
  • [131008210080] |It's very easy and quick to create packages: just a few lines in a PKGBUILD file and it's done, no need to deal with control/rules/copyright/changelog/whatever like with Debian packages. [131008210090] |And in a few clicks on a web UI your package is shared with everyone on AUR (Arch User Repository). (A minimal PKGBUILD sketch follows below.)
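For illustration, here is a minimal PKGBUILD sketch; the package name, URL and version are made up, and the authoritative format is documented on the Arch wiki:

    # Minimal PKGBUILD sketch -- pkgname, url and source are hypothetical
    pkgname=hello-example
    pkgver=1.0
    pkgrel=1
    pkgdesc="Toy example package"
    arch=('i686' 'x86_64')
    url="http://example.com/hello-example"
    license=('GPL')
    source=("http://example.com/hello-example-$pkgver.tar.gz")
    md5sums=('SKIP')   # 'SKIP' disables checksum verification; use makepkg -g to generate real sums

    build() {
      cd "$srcdir/hello-example-$pkgver"
      ./configure --prefix=/usr
      make
    }

    package() {
      cd "$srcdir/hello-example-$pkgver"
      make DESTDIR="$pkgdir" install
    }

Running makepkg -s next to this file should then produce an installable .pkg.tar.* package.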
[131008210100] |Things I get in Debian and not in Arch: [131008210110] |
  • Triggers/hooks (what makes apt update the icon cache, the mandb or whatever, just by looking at where the package installs files, with no need for the packager to do anything) (it seems there are plans to implement this).
  • [131008210120] |debconf (no big deal, and by the way, by forcing me to do things manually it forces me to know exactly what is done) and proper handling of new config files (I would at least like pacman to know whether a config file in a new package version differs from the installed one because it was changed in the new version or because I modified it locally).
  • [131008210130] |package signing (it seems it's being worked on).
[131008210140] |For Arch being light, the only real reason is that it comes with few packages installed by default and you're encouraged to add just what you need, so probably not installing optional dependencies by default also incites users to avoid bloat. [131008220010] |Delete files in a directory that match a regexp, using a Mac terminal [131008220020] |How do I delete files in a directory that match a given regexp, or a similar solution, using a Mac terminal? [131008230010] |Use the find command. [131008230020] |Find all files (recursively) matching a regex: find . -type f -regex '/ex/' Find all files (recursively) matching a regex and delete them: find . -type f -regex '/ex/' -exec rm {} \; [131008230030] |The braces ({}) store the found pathname, and the backslash escapes the semicolon because it's passed to find; without escaping it, it would be consumed by the shell. [131008230040] |If that went over your head, read the first two chapters of "Learning the Bash Shell". [131008230050] |Check the man pages for find for more options. [131008230060] |There are a lot more ways to search. [131008240010] |Very similar to jorelli's answer. [131008240020] |This is what I use: [131008240030] |The -print0 and -0 arguments cause find to output a NUL-separated list and xargs to perform the rm command on each element of the list, so paths with spaces are not a problem. [131008250010] |YaST2: Command-line equivalents to GUI navigation [131008250020] |I'd like to automate some interactions with yast2. I assume I can do everything on the command line that I can do in the curses interface, but I'm not sure how to figure out what the commands are. [131008250030] |For example, if I want yast2 to use a local ISO as a package repository, I know how to do it through the curses GUI (Software->Add-On Products, Add, Local ISO Image, Browse, ...). [131008250040] |Is there a way to identify these interactions with arguments that can be passed to yast2 on the command line? [131008260010] |It seems you can't do as much with the command line as with the ncurses interface, as yast modules have to individually implement support for the CLI. [131008260020] |According to the openSUSE 11.1 Reference Guide: [131008260030] |To use YaST functionality in scripts, YaST provides command line support for individual modules. [131008260040] |Not all modules have command line support. [131008260050] |To display the available options of a module, enter: [131008260060] |yast help [131008260070] |If a module does not provide command line support, the module is started in text mode and the following message appears: [131008260080] |This YaST module does not support the command line interface. [131008260090] |(use yast --list to list modules) [131008270010] |There are no items in the GNOME main menu after openSuse 11.4 upgrade. [131008270020] |I've upgraded my openSuse OS from 11.3 to 11.4. [131008270030] |Everything went smoothly, but now I cannot see any items in the main menu (the analog of Start in Windows) and I cannot set any background picture, so now I see a single-color desktop. [131008270040] |The KDE desktop is working properly. [131008270050] |Machine specs: Laptop AMD Turion TL-56, Video Card Nvidia Go 7200 (driver from NVidia is installed) [131008270060] |I think there are some settings which are now preventing GNOME from initializing properly, but I still cannot find any that could change the behavior. [131008280010] |Worst case, try removing your existing GNOME configuration and see if it fixes it.
[131008280020] |Preferably do this from a shell when not logged into Gnome: [131008280030] |And log back in to Gnome. [131008280040] |If this fixes it, you can either accept it or restore your configuration and dig deeper for a solution. [131008290010] |Configure Python to include another directory when looking for packages [131008290020] |I'm on a SUSE machine where the default Python site-packages location is /usr/lib64/python2.6/site-packages. [131008290030] |Some packages automatically install themselves in /usr/lib/python2.6/site-packages instead. [131008290040] |How do I configure Python so that it also looks in /usr/lib64/python2.6/site-packages? [131008300010] |Use sys.path: [131008300020] |You can also check the site module documentation, which explains how the site-specific paths are computed. [131008310010] |(Please correct errors and omissions as necessary. [131008310020] |Thanks.) [131008310030] |First, a question and a comment. [131008310040] |I don't use SUSE, so take this with a pinch of salt. [131008310050] |Are the packages that install in /usr/lib/python2.6/site-packages official packages? [131008310060] |If so, SUSE is broken, so that is not likely. [131008310070] |If they are not official packages, you could either ask the packagers to use the standard paths, or, alternatively, you could submit a wishlist bug to SUSE asking them to support this additional path. [131008310080] |This will save you and other people additional headaches. [131008310090] |For the moment, you have the following possibilities, in order of decreasing scope: [131008310100] |
  • Change the module search path for all users (method 1) [131008310110] |Change the module search path in the Python installation. [131008310120] |The default module search path is hardwired into the binary. [131008310130] |Add-on paths can be configured at runtime, for example in the site.py file. [131008310140] |For example, Debian uses /usr/lib/python2.6/site.py (for the default python 2.6 installation) to do its site-specific configuration. [131008310150] |At the top of the file is written [131008310160] |The Debian patch debian/patches/site-locations.diff says [131008310170] |For Debian and derivatives, this sys.path is augmented with directories for packages distributed within the distribution. [131008310180] |Local addons go into /usr/local/lib/python/dist-packages, Debian addons install into /usr/{lib,share}/python/dist-packages. /usr/lib/python/site-packages is not used. [131008310190] |The patch in question is [131008310200] |So you could modify the site.py in your system package to produce a modified module search path. [131008310210] |You probably don't want to do this, though. [131008310220] |For one thing, you will have to merge this in on every update of your distribution's python package.
  • [131008310230] |Change the module search path for all users (method 2) [131008310240] |Add a file of the form something.pth to a directory that is already in the search path; the file contains a path, either relative or absolute. [131008310250] |Eg. [131008310260] |In another terminal do [131008310270] |
  • Change the module search path for all users (method 3) [131008310280] |The environment variable PYTHONPATH is normally used to append to the system path at user level. [131008310290] |You can put it in a file which will be sourced by all users. [131008310300] |Eg. in Debian we have /etc/bash.bashrc, which says at the top [131008310310] |So you could set PYTHONPATH there. [131008310320] |You probably want it to be sourced for both login and interactive shells, so you'll want to check on that. [131008310330] |Unfortunately, distributions are often flaky about enabling this. [131008310340] |The paths in PYTHONPATH are added to the default list of search paths in the system (which can be obtained for example by sys.path - see below). [131008310350] |Allowing for the possibility that PYTHONPATH is set already, just add desired additional directories to it, eg. [131008310360] |If you source the PYTHONPATH variable, and then check sys.path again, you will see the paths have been added. [131008310370] |Note that the position in which the paths in PYTHONPATH are added to the pre-existing paths does not seem to be prescribed by the implementation.
  • [131008310380] |Change the module search path per user. [131008310390] |The usual way is to change PYTHONPATH in the user's bashrc, namely ~/.bashrc. [131008310400] |Again, check that it is sourced for both login and interactive shells.
  • [131008310410] |Change the module search path on a per-script basis. [131008310420] |This is done by appending to sys.path, namely [131008310430] |This will only work for the script that is importing this. [131008310440] |This is normally used, as far as I know, for casual use, when importing modules in nonstandard locations, like from somewhere in a home directory. (A short sketch of methods 2 and 3 follows this list.)
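As a concrete sketch of methods 2 and 3 (the direction of the paths in the question is a little confusing, so this assumes the default installation under /usr/lib64 should additionally search /usr/lib/python2.6/site-packages; adjust to taste):

    # Method 2: drop a .pth file into a directory that is already searched
    echo /usr/lib/python2.6/site-packages > /usr/lib64/python2.6/site-packages/local-lib.pth

    # Method 3: extend PYTHONPATH, here for the current shell only
    export PYTHONPATH="${PYTHONPATH:+$PYTHONPATH:}/usr/lib/python2.6/site-packages"

    # Verify that the directory now shows up in the search path
    python -c 'import sys; print sys.path'

The .pth file name (local-lib.pth) is arbitrary; only the .pth extension matters.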
[131008310450] |See also Greg Ward on Modifying Python's Search Path. [131008310460] |This has a good discussion of the available alternatives. [131008320010] |Install Ubuntu over Suse without affecting Windows [131008320020] |I have had Windows XP and Linux Suse 11 installed on my laptop for some time. [131008320030] |I want to replace my Suse installation with Ubuntu and would like to know if it is possible to do that without affecting the Windows installation. [131008320040] |I have the Grub boot-loader that came with Suse. [131008320050] |What are the steps to follow (I didn't do the initial installation, so I don't know if it is safe or not)? [131008320060] |Thank you! [131008330010] |It's easy and low-risk. [131008330020] |Just do the installation normally, and when the time comes to partition the disk, choose a manual partitioning strategy and make sure you overwrite the Suse partition(s) only. [131008330030] |Ubuntu will want to override Suse's bootloader with its own. [131008330040] |Let it: Grub needs some files in /boot, which you're going to overwrite. [131008330050] |The Grub installer will automatically detect all installed operating systems, so you'll still be able to boot both Linux and Windows. [131008330060] |I don't know how Suse 11 configures Grub; the way to configure which OS gets booted by default might be different, so you should take a quick look at the Ubuntu Grub community documentation. [131008340010] |ssh DISPLAY variable [131008340020] |Once upon a time, [131008340030] |after ssh'ing into my desktop from my laptop would cause totem to play movie.avi on my desktop [131008340040] |now it gives error [131008340050] |I reinstalled Debian squeeze when it went stable on both computers, and I guess I broke the config. [131008340060] |I've googled on this, and cannot for the life of me figure out what I'm supposed to be doing. [131008340070] |vlc has an http interface that works, but it isn't as convenient as ssh [131008340080] |and, yes, I want to figure this out because I hate having to walk over to my desktop to play an episode of Grey's. [131008350010] |You don't want that $ on the front of DISPLAY=:0.0. [131008350020] |That said, what's the error message? [131008360010] |You need to export DISPLAY=:0.0 [131008370010] |(Adapted from Linux: wmctrl cannot open display when session initiated via ssh+screen) [131008370020] |

    DISPLAY and XAUTHORITY

    [131008370030] |An X program needs two pieces of information in order to connect to an X display. [131008370040] |
  • It needs the address of the display, which is typically :0 when you're logged in locally or :10, :11, etc. when you're logged in remotely (but the number can change depending on how many X connections are active). [131008370050] |The address of the display is normally indicated in the DISPLAY environment variable.
  • [131008370060] |It needs the password for the display. [131008370070] |X display passwords are called magic cookies. [131008370080] |Magic cookies are not specified directly: they are always stored in X authority files, which are a collection of records of the form “display :42 has cookie 123456”. [131008370090] |The X authority file is normally indicated in the XAUTHORITY environment variable. [131008370100] |If $XAUTHORITY is not set, programs use ~/.Xauthority. (A small example of inspecting these files follows this list.)
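As an aside, you can inspect the magic cookies in an authority file with the xauth utility; the explicit file name below is only an illustration of gdm's randomly generated paths:

    # List cookies in the default file (~/.Xauthority or $XAUTHORITY)
    xauth list
    # List cookies in an explicitly named authority file
    xauth -f /var/run/gdm/auth-for-alice-xxxxxx/database list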
[131008370110] |You're trying to act on the windows that are displayed on your desktop. [131008370120] |If you're the only person using your desktop machine, it's very likely that the display name is :0. Finding the location of the X authority file is harder, because with gdm as set up under Debian squeeze or Ubuntu 10.04, it's in a file with a randomly generated name. [131008370130] |(You had no problem before because earlier versions of gdm used the default setting, i.e. cookies stored in ~/.Xauthority.)
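Once the two values are known (the next section shows ways to find them), running a program on the desktop's display from the ssh session, as in the totem question above, is just a matter of exporting them; a sketch, with a hypothetical gdm cookie path:

    export DISPLAY=:0
    export XAUTHORITY=/var/run/gdm/auth-for-alice-xxxxxx/database   # randomly generated name
    totem movie.avi   # should now play on the desktop's screen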

    Getting the values of the variables

    [131008370150] |Here are a few ways to obtain the values of DISPLAY and XAUTHORITY: [131008370160] |
  • You can systematically start a screen session from your desktop, perhaps automatically in your login scripts (from ~/.profile; but do it only if logging in under X: test if DISPLAY is set to a value beginning with : (that should cover all the cases you're likely to encounter)). [131008370170] |In ~/.profile: [131008370180] |Then, in the ssh session: [131008370190] |
  • You could also save the values of DISPLAY and XAUTHORITY in a file and recall the values. [131008370200] |In ~/.profile: [131008370210] |In the ssh session: [131008370220] |
  • You could detect the values of DISPLAY and XAUTHORITY from a running process. [131008370230] |This is harder to automate. [131008370240] |You have to figure out the PID of a process that's connected to the display you want to work on, then get the environment variables from /proc/$pid/environ (eval export $(</proc/$pid/environ tr '\0' '\n' | grep -E '^(DISPLAY|XAUTHORITY)=')¹); alternatively, you can arrange for your desktop X session to copy its cookies into ~/.Xauthority. [131008370280] |There can be a security issue if your home directory is accessible over NFS or another network file system that allows remote administrators to view its contents. [131008370290] |They'd still need to connect to your machine somehow, unless you've enabled X TCP connections (Debian has them off by default). [131008370300] |So for most people, this either does not apply (no NFS) or is not a problem (no X TCP connections). [131008370310] |To copy cookies when you log into your desktop X session, add the following lines to ~/.xprofile or ~/.profile (or some other script that is read when you log in): [131008370320] |¹ In principle this lacks proper quoting, but in this specific instance $DISPLAY and $XAUTHORITY won't contain any shell metacharacter. [131008380010] |Why did my package get installed to /opt? [131008380020] |A while back I installed couchdb from source on my debian machine. [131008380030] |It seemed to install okay and everything worked. [131008380040] |A few weeks later I wanted to upgrade and installed from a .deb using dpkg. [131008380050] |Everything now seems to have been installed under /opt. [131008380060] |It all seems to work fine, but I don't really understand why it's all been installed under /opt. [131008380070] |It doesn't feel as "smooth" as when things were installed directly into /usr/local/bin and /etc and /var/log. [131008380080] |It's more cumbersome to configure than when I previously just edited the configuration files that were installed under /etc/couchdb. [131008380090] |Would someone mind explaining to me why installation to /opt is "a good thing" and why this is a better way of doing things than the previous installation, which was from source? [131008380100] |I know this is a rather vague question, but I'm only competent with Linux, not an expert, and don't understand the thinking behind installing to /opt. [131008390010] |/opt is for third-party software. [131008390020] |There is a couchdb package for debian (I mean, official), so if you downloaded another one, it's OK: it must be installed in either /opt or /usr/local/bin. [131008400010] |The Filesystem Hierarchy Standard gives these definitions: [131008400020] |
  • /opt : Add-on application software packages
  • [131008400030] |/usr/local : Local hierarchy (for use by the system administrator when installing software locally)
[131008400040] |The way I read that: [131008400050] |
  • Standard system applications should go in /bin and /usr/bin (implied)
  • [131008400060] |Third-party packages should go in /opt
  • [131008400070] |Something should only be installed into /usr/local if the system administrator wants it to (a quick dpkg sketch follows this list)
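As a quick way to check where a binary package actually put its files (and, conversely, which package owns a given path), dpkg can be queried directly; the couchdb package name is taken from the question and the path is illustrative:

    dpkg -L couchdb        # list every file installed by the package
    dpkg -S /opt/couchdb   # find out which package owns a path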
[131008400080] |By extension, if the sysadmin installs something using dpkg or rpm, it should not go into /usr/local by default. [131008400090] |So it's arguably doing the right thing. [131008410010] |Debian Policy says: [131008410020] |9.1.2 Site-specific programs [131008410030] |As mandated by the FHS, packages must not place any files in /usr/local, either by putting them in the file system archive to be unpacked by dpkg or by manipulating them in their maintainer scripts. [131008410040] |There is no such specific prohibition against /opt. [131008410050] |Policy also adds: [131008410060] |The location of all installed files and directories must comply with the Filesystem Hierarchy Standard (FHS), version 2.3, with the exceptions noted below, and except where doing so would violate other terms of Debian Policy. [131008410070] |and the Filesystem Hierarchy Standard says: [131008410080] |The directories /opt/bin, /opt/doc, /opt/include, /opt/info, /opt/lib, and /opt/man are reserved for local system administrator use. [131008410090] |and then further down: [131008410100] |Distributions may install software in /opt, but must not modify or delete software installed by the local system administrator without the assent of the local system administrator. [131008410110] |Note that Policy is for Debian itself, but it generally corresponds to a best practice recommendation. [131008410120] |The upshot, if I am reading this correctly, is that it is not OK to install binary (deb) packages to /usr/local, but it is OK to install in /opt as long as it does not interfere with the sysadmin's use of the space. [131008410130] |My personal opinion is that it is a bad idea to have deb packages in either /usr/local or /opt. [131008410140] |I disagree with D4RIO when he says: [131008410150] |There is a couchdb package for debian (I mean, official), so if you downloaded another one, it's OK: it must be installed in either /opt or /usr/local/bin. [131008410160] |You don't generally want two different deb packages corresponding to the same software installed, and if they actually have the same package name, dpkg won't allow it anyway. [131008410170] |Unofficial Debian packages of software available as an official package commonly (but not always) have the same name as the official ones; you just install one or the other, not both. [131008410180] |For what it is worth, I think putting deb packages in /opt is a bad idea, and the only recent occurrence of this I've seen is with Google Chrome. [131008410190] |But those Google people tend to be a bit wacky. [131008420010] |httpd running as apache.apache, but logs owned by root.root? [131008420020] |ps shows my httpd processes as [131008420030] |I'm running CentOS 5.3. [131008420040] |All the log files in /var/log/httpd are owned by root. [131008420050] |How come? [131008430010] |The httpd children run as apache, but the process that spawns them runs as root (as is necessary to bind a privileged port, e.g. port 80). [131008430020] |Look closely and you'll see an httpd running as root. [131008440010] |How to use 'wine' and AviSynth and Avs2YUV with 'mplayer/mencoder' (or any player/encoder) [131008440020] |I have a lot of AviSynth scripts (.avs) which create a montage of text, pics and video, but until yesterday I'd had no luck running AviSynth scripts in 'wine'. [131008440030] |I read about a 'Windows' program called Avs2YUV which, according to wineHQ, is platinum and is "intended for use under Wine to interface between Avisynth and Linux-based video tools".
[131008440040] |I've had partial success with a couple of very simple scripts, but "partial" means I don't know how to use it properly, or the AviSynth-Avs2YUV combo doesn't work properly (or both). [131008440050] |Below are 2 scripts: The first one outputs and saves video only (as intended), but I'd like to know if it is possible to pipe Avs2YUV's stdout directly into a Linux media player... [131008440060] |I've tried a few options, but nothing seems to work. [131008440070] |On the other hand, the saved .264 file does play, so AviSynth and Avs2YUV are doing something right here. [131008440080] |(A quick pre-posting EDIT: I've just corrected a typo where I had put .avi instead of .264, and I realized that I really don't know what x264 does (I'm so used to avi encoding, but I have this feeling that it may be a video-only encoder???? ... so I'll mention it now, I have no particular interest in x264.. [131008440090] |It was in the example I followed.. [131008440100] |I just want to produce a playable video+audio .. the wrapper and codecs aren't particularly important to me.. [131008440110] |I'm happy with avi.. actually I prefer it because it works well with AviSynth.. [131008440120] |Catch-22... [131008440130] |The second one behaves very much like the first one, but it produces no audio, which it should. [131008440140] |AviSynth scripts are well known, and are directly playable by many players (in Windows), but with this need to use Avs2YUV, I'm somewhat in unknown territory... [131008440150] |I'd appreciate some pointers on these two issues, and perhaps there is an entirely different way to use AviSynth in wine, other than in conjunction with Avs2YUV... or is the idea of using AviSynth in Linux just a myth? [131008440160] |Here are the scripts: [131008450010] |I've found a reasonable working solution which allows both audio and video to be processed in a (normal) single pass of the AviSynth script... [131008450020] |...avidemux2 + avsproxy to the rescue! [131008450030] |It has some limitations, like not handling DirectShowSource() very well... [131008450040] |DirectShowSource was handy, because it autodetected the type of video/audio, but there are typically other ways around that. [131008450050] |I've done some minor tests, and it has rendered a montage of two text panels (using the .AAS subtitle format in Unicode), and another panel of a subtitled picture. [131008450060] |It seems to handle simple video without any problems... [131008450070] |I have had to tweak a few minor things, but it seems manageable... [131008450080] |It is certainly functional enough that I'll continue with it, to find its quirks :) [131008450090] |Both avsproxy and avidemux2 have CLI and GUI interfaces... [131008450100] |If I can get the CLIs to work together, then I'm pretty close to getting an AviSynth script to play directly in a media player... avidemux2 can be set to "copy", and the resulting avi output can be piped directly into a player (hopefully)... [131008450110] |It's looking good... [131008460010] |How to write terminal contents into a file [131008460020] |Here's my situation: I open a terminal and run a program which displays a live feed in the terminal (text) that changes every second. [131008460030] |Only the "Enter" key can be used while this program is running (it exits the program). [131008460040] |So you can't type anything else into the console. [131008460050] |I would like to write the terminal contents into a file, say after every second. [131008460060] |How do I do it?
[131008460070] |By opening a 2nd console and using some command? [131008460080] |I can't get it to work with the setterm -dump command. [131008470010] |How about running the program like this: [131008470020] |This redirects the output of program to /path/to/file instantly. [131008470030] |And if you want to have the output in your terminal, as well as save it into a file: [131008470040] |Check out Is there a way in bash to redirect output and still have it go to stdout? [131008480010] |You could use GNU screen, along with its logging functionality. [131008480020] |Note also that the logfile flush secs command allows you to control how often the output is flushed to disk. [131008480030] |From the Screen User's Manual: [131008480040] |— Command: logfile filename — Command: logfile flush secs [131008480050] |Defines the name the log files will get. [131008480060] |The default is ‘screenlog.%n’. [131008480070] |The second form changes the number of seconds screen will wait before flushing the logfile buffer to the file-system. [131008480080] |The default value is 10 seconds. [131008490010] |live-f1 redraws the screen with new data by using terminal control characters (ncurses), just like top or mtr. [131008490020] |That's why you see all this junk when redirecting to a file or non-terminal device. [131008490030] |Unfortunately, live-f1 doesn't provide an option for getting output appropriate to save and later extract data for statistics and such. [131008490040] |If you still want to save the output for replaying it later, you can use script. [131008490050] |This will record live-f1 and create two files, typescript and timingfile. [131008490060] |This will replay the output [131008500010] |How to redirect TTY1 to an X11 (KDE) Konsole shell? [131008500020] |I would like to see what is going on at TTY1 (the console I was booting from) while I am now on TTY7 running X11 with KDE4, without switching to TTY1 (ALT-CTRL-F1). [131008500030] |I would like to see it inside KDE's Konsole, if possible. (Have TTY1 redirected into Konsole, sort of, including all the boot history that was scrolling down while I was booting.) [131008500040] |Is that possible? [131008510010] |Check out ttysnoop: http://www.linuxhelp.net/guides/ttysnoop/ [131008510020] |I think you will not be able to see everything since boot, only everything after you connected to the snoop session. [131008520010] |fold -bw80 /dev/vcs1 [131008520020] |Substitute your actual width for 80. [131008520030] |This doesn't support attributes; the /dev/vcsa* devices include attributes. [131008530010] |KDE System Tray Organizer [131008530020] |Is it possible to organize the order of icons, or lock them, in the KDE system tray? [131008530030] |It's very confusing with a lot of shuffled application icons. [131008530040] |Edit: Does anybody know where the system tray configuration files are? [131008540010] |Remove empty configuration section [131008540020] |Files like ~/.config/vlc/vlcrc are 99% junk if you want to version control only the configuration options. [131008540030] |I've got a script to remove the comments, but there's a ton of empty configuration sections left over. [131008540040] |My sed- and awk-fu is not up to speed, so how can I remove the empty configuration sections? [131008540050] |The first line of a configuration section matches ^\[.*\]$, and it is empty if the first line is followed by any number of lines consisting only of whitespace, then followed by another line matching ^\[.*\]$ or EOF.
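Going by that spec, a minimal awk sketch of one way to do it (the vlcrc file name is illustrative): hold each section header back and print it only once a non-blank, non-header line shows up in that section:

    awk '/^\[.*\]$/ { header = $0; next }          # remember the latest section header
         /^[[:space:]]*$/ { next }                 # drop blank lines
         { if (header != "") { print header; header = "" }
           print }' vlcrc

This prints only sections that contain at least one real setting line; empty sections disappear because their headers are overwritten before ever being printed (blank lines inside non-empty sections are dropped too).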
[131008550010] |Recall section headers as you see them, but don't print them until you see a setting line in that section. [131008550020] |You could do it in sed by storing the section header in the hold space, but it's clearer in awk. [131008560010] |As an alternative to an awk one-liner, you can store an awk script in a file. [131008560020] |Here is a slightly more sophisticated version of the script: [131008560030] |Just save it to a file like conf-filter.awk and mark it executable with chmod +x conf-filter.awk. [131008570010] |To keep only lines with text, I would personally use grep . instead. [131008580010] |1000 iptables entries on CentOS? [131008580020] |I just got a new dedicated server with CentOS and I'm trying to debug some network problems. [131008580030] |In doing so, I found over one thousand iptables entries. [131008580040] |Is this the default on a CentOS system? [131008580050] |Is there some firewall package that might be guilty of doing that? [131008590010] |Is this the default on a CentOS system? [131008590020] |No. [131008590030] |The default one is below. [131008590040] |Is there some firewall package that might be guilty of doing that? [131008590050] |Probably. [131008590060] |You don't say what the entries are, but if they're banning CIDR blocks I'd guess your server has a firewall like APF or CSF that can subscribe to blacklists like Spamhaus' DROP, and that is how your rules are being generated. [131008590070] |Alternatively, there might be some cron job which does it all. [131008590080] |If you do a grep -rl iptables /etc/* that will tell you all the files that mention iptables and hopefully track down what is generating your entries. [131008590090] |Here's the default iptables from /etc/sysconfig/iptables: [131008600010] |Is there a downside to ksplice? [131008600020] |ksplice is an open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system. [131008600030] |(From Wikipedia.) [131008600040] |Is there a downside to using ksplice? [131008600050] |Does it introduce any kind of instability? [131008600060] |If not, why is it not included by default in more Linux distributions? [131008610010] |(A disclaimer: I work for Ksplice.) [131008610020] |Re: "Is there a downside to using Ksplice?" you may find the answer to a similar question over at ServerFault to be useful: http://serverfault.com/questions/78406/is-ksplice-production-ready/ [131008620010] |Technically it's very sound; I think the reasons distributions don't provide this method of patching yet are: [131008620020] |
  • It does not integrate with the existing update methods (packaging-wise).
  • [131008620030] |It adds to the burden of the distro to provide another method of upgrading.
[131008630010] |Is there a way to find which iptables rule was responsible for dropping a packet? [131008630020] |I have a system that came with a firewall already in place. [131008630030] |The firewall consists of over 1000 iptables rules. [131008630040] |One of these rules is dropping packets I don't want dropped. [131008630050] |(I know this because I did iptables-save followed by iptables -F and the application started working.) [131008630060] |There are way too many rules to sort through manually. [131008630070] |Can I do something to show me which rule is dropping the packets? [131008640010] |Run iptables -L -v -n to see the packet and byte counters for every table and for every rule. [131008650010] |You could add a TRACE rule early in the chain to log every rule that the packet traverses. [131008650020] |I would consider using iptables -L -v -n | less to let you search the rules. [131008650030] |I would look at the port, address, and interface rules that apply. [131008650040] |Given that you have so many rules, you are likely running a mostly closed firewall and are missing a permit rule for the traffic. [131008650050] |How is the firewall built? [131008650060] |It may be easier to look at the builder rules than the built rules. [131008660010] |Since iptables -L -v -n has counters, you could do the following. [131008660020] |This way you will see only the rules that incremented. [131008670010] |bash -i behavior and "why is my terminal hung?" [131008670020] |This happens on every distro I've tried: [131008670030] |And now the session is hung. [131008670040] |I.e., you get one command and then you pretty much have to close the terminal window. [131008670050] |SIGTERM (aka ^C) is caught by bash, but it won't get you back to a working shell. [131008670060] |I'm guessing it has something to do with the -i causing bash to fight with cat for control of the tty, but I can't find anything definitive. [131008670070] |Can anyone explain what's going on? [131008670080] |And how do I automate input to a program that's running execlp("/bin/bash", "bash", "-i") without hanging? [131008670090] |Do I have to write an expect-like program that handles tty vagaries? [131008680010] |The short answer: yes, you're going to have to use Expect. [131008680020] |As for why it's failing: based on its behavior, bash -i turns on readline (even with --noediting), which plays rather badly with pipes. [131008680030] |It also appears to be setting terminal modes (including non-blocking mode) via stdout instead of stdin, which means you lose the expected behavior of most control characters. [131008680040] |(It is not, however, reading the tty directly.) [131008680050] |Side note: ^C sends SIGINT, not SIGTERM. [131008680060] |Both are, however, trapped in interactive (-i) mode; SIGHUP works to kill it. [131008690010] |Splitting large directory tree into specified-size chunks? [131008690020] |I have a directory tree that I would like to back up to optical disks. [131008690030] |Unfortunately, it exceeds the size of any one disk (it's about 60GB). [131008690040] |I am looking for a script that would split this tree into appropriately sized chunks with hard links or whatnot (leaving the original untouched). [131008690050] |I could then feed these bite-size trees into the backup process (add PAR2 redundancy, etc.). [131008690060] |It's not a fancy script, but it seems like it might have already been done. [131008690070] |Suggestions?
[131008690080] |(Spanning and writing in one step is a no-go because I want to do more stuff before the files get burned.) [131008700010] |The rar archiver can be instructed to automatically split the archive it creates into chunks of a specific size with the -vsize flag. [131008700020] |To archive that directory tree named foo into chunks of, say, 500 megabytes apiece, you'd specify rar a backup.rar -v500m foo/ [131008710010] |I once made an ugly script for a similar purpose. [131008710020] |It is just a kludge, but when I wrote it I didn't care about execution time or prettiness. [131008710030] |I'm sure there are more "productified" versions of the same concept around, but if you wish to get some ideas or something to start hacking on, here goes (I did it in 2008, so use at your own risk!) :-) [131008710040] |I think I had the result shared through Samba to a Windows host that burned discs from it. [131008710050] |If you use the above unaltered, you may wish to use mkisofs or another archiver that resolves symlinks. [131008720010] |I once wrote a script to solve a similar problem -- I called it "distribute" (you can read the main code of the script or the file with the help message, or download it as a package); from its description: [131008720020] |distribute -- Distribute a collection of packages on multiple CDs (especially good for future use with APT) [131008720030] |Description: the `distribute' program makes doing the tasks related to creating a CD set for distribution of a collection of packages easier. [131008720040] |The tasks include: laying out the CDs' filesystem (splitting the large collection of packages into several discs etc.), preparing the collection for use by APT (indexing), creating ISO images and recording the discs. [131008720050] |Periodic updates to the initially distributed collection can be issued with the help of `distribute'. [131008720060] |It does the whole process in several stages: at one stage, it creates the future disk "layouts" by using symlinks to the original files -- so you can intervene and change the future disk trees.
[131008720140] |(Please pay attention that the package includes additional useful patches not applied in the presented code listing at the Git repo linked above!) [131008730010] |backup2l can do a lot of this work. [131008730020] |Even if you don't use the package directly, you might get some script ideas from it. [131008740010] |We shouldn't forget that the essence of the task is indeed quite simple; as put in a tutorial on Haskell (which is written around the working through of the solution for this task, incrementally refined) [131008740020] |Now let's think for a moment about how our program will operate and express it in pseudocode: [131008740030] |Sounds reasonable? [131008740040] |I thought so. [131008740050] |Let's simplify our life a little and assume for now that we will compute directory sizes somewhere outside our program (for example, with "du -sb *") and read this information from stdin. [131008740060] |(from Hitchhikers guide to Haskell, Chapter 1) [131008740070] |(Additionaly, in your question, you'd like to be able to tweak (edit) the resulting disk layouts, and then use a tool to burn them.) [131008740080] |You could re-use (adapt and re-use) a simple variant of the program from that Haskell tutorial for splitting your file collection. [131008740090] |Unfortunately, in the distribute tool that I've mentioned here in another answer, the simplicity of the essential splitting task is not matched by the complexity and bloatedness of the user interface of distribute (because it was written to combine several tasks; although performed in stages, but still combined not in the cleanest way I could think of now). [131008740100] |To help you make some use of its code, here's an excerpt from the bash-code of distribute (at line 380) that serves to do this "essential" task of splitting a collection of files: [131008740110] |(read more after line 454) [131008740120] |Note that the eatFiles function prepares the layouts of the future disks as trees where the leaves are symlinks to the real files. [131008740130] |So, it is meeting your requirement that you should be able to edit the layouts before burning. [131008740140] |The mkisofs utility has an option to follow symlinks, which is indeed employed in the code of my mkiso function. [131008740150] |The presented script (which you can take and rewrite to your needs, of course!) follows the simplest idea: to sum the sizes of files (or, more precisely, packages in the case of distribute) just in the order they were listed, don't do any rearrangements. [131008740160] |The "Hitchhikers guide to Haskell" takes the optimization problem more seriously and suggests program variants that would try to re-arrange the files smartly, in order for them to fit better on disks (and require less disks): [131008740170] |Enough preliminaries already. let's go pack some CDs. [131008740180] |As you might already have recognized, our problem is a classical one. [131008740190] |It is called a "knapsack problem" (google it up, if you don't know already what it is. [131008740200] |There are more than 100000 links). [131008740210] |let's start from the greedy solution... [131008740220] |(read more in Chapter 3 and further.) [131008740230] |

    Other smart tools

    [131008740240] |I've also been told that Debian uses a tool to make its distro CDs that is smarter than my distribute w.r.t. collections of packages: its results are nicer because it cares about inter-package dependencies and tries to make the collection of packages that goes onto the first disk closed under dependencies, i.e., no package from the 1st disk should require a package from another disk (or at least, I'd say, the number of such dependencies should be minimized).
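Coming back to the greedy idea from the Haskell tutorial quoted above, here is a small first-fit sketch in shell; the 4.3 GB per-disk budget and the directories-only input are assumptions, and directory names containing tabs would need extra care:

    # Greedy first-fit: assign each directory to the current disk until it would overflow
    du -sb -- */ | awk -F'\t' -v max=4300000000 '
        {
            if (used + $1 > max) { disk++; used = 0 }   # start a new disk when the next item overflows
            used += $1
            printf "disk%02d\t%s\n", disk, $2
        }'

The output is a disk-number/path table that a later step could turn into per-disk trees of hard links (e.g. with cp -al) before adding PAR2 redundancy and burning.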