[131058800010] |How can I have 400% CPU occupied on 2 cores [131058800020] |I understand that Σ(%CPU) ≤ logicalcores*(1+ε) (where ε is measurement and rounding error), but how can a 2-core system show 2 different processes each taking 200% of CPU (as measured by htop)? [131058800030] |EDIT Cores in the above equation means logical cores, i.e. taking into account all hyperthreading, number of CPUs, etc. [131058800040] |EDIT 2 Although htop displays the number of processors, I attach cpuinfo [131058810010] |Do you have 2 processors or 4? [131058810020] |If you have two cores with two CPUs each through hyperthreading, that's a total of four processors (as far as the OS is concerned). [131058820010] |Two dual-core processors contain 4 CPUs. [131058820020] |Multi-threaded processors also exist. [131058820030] |Each thread appears as a CPU to the OS, although they are not true CPUs as cores are. [131058820040] |Usually multi-threaded CPUs have two threads per core, so you can have a single dual-threaded, dual-core processor that appears to your OS as 4 CPUs. [131058820050] |It should be noted that Sun Microsystems had a T2 chip which had 8 cores and 8 threads per core, for 64 virtual CPUs on a single processor. [131058830010] |A single physical hyperthreaded processor will appear as two logical processors. [131058830020] |Thus your dual-core processor in which each core is hyperthreaded will appear as four logical processors to things like top, cpuinfo, prstat, activity monitor, etc. [131058840010] |Preventing battery from charging [131058840020] |I'm running on UPS power and would like to prevent the laptop's battery from charging, to increase the amount of power available to other devices. [131058840030] |Is there a way to do this? [131058840040] |update: The machine is a Dell Latitude D400. [131058840050] |If people want more details, just ask. [131058840060] |Also, I'm gathering that I need to explain my desired setup a little better.
[131058840070] |I've gotten a bunch of suggestions about taking the battery out. [131058840080] |I'm not sure if people are suggesting to take the battery out while the machine is running — this, as I understand, is not a good idea with most laptops — or to just remove the battery altogether. [131058840090] |The latter option is not optimal, because ideally I'd like to use the 30-60 minutes of power in the laptop battery and then switch over to UPS power. [131058840100] |The details of the switch-over may constitute a separate question, but if I can't find a way to keep the laptop battery from charging, then removing the battery from the machine altogether may be the best way to do this. [131058840110] |I'm not sure yet if this machine will run without a battery, but I'll check that out. [131058840120] |Other than the laptop, the UPS is just supporting a cable modem, a router and a USB hub. [131058840130] |Again, in the idealized version of this setup, all the power management changes would be automated, i.e. not require replugging anything or pressing Fn-keys. [131058840140] |I'd like the machine to start using laptop battery power when apcupsd indicates that the UPS A/C is out, and then start using UPS power, but not charging the battery, when the battery is almost depleted. [131058850010] |There may be. [131058850020] |Some 'proprietary' extensions allow operating on the battery. [131058850030] |For example, the tp-smapi patchset allows setting the maximum charging threshold for ThinkPads. [131058850040] |Setting it to 0 would prevent the battery from charging. [131058850050] |Some laptops may not have that possibility in the BIOS, so you need to post details about your hardware to receive any specifics. [131058860010] |There's no danger in removing the battery as long as you've got line power. [131058860020] |I (used to*) do it all the time. [131058860030] |* I now have a MacBook Pro without a removable battery.
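For the tp-smapi route mentioned above, the knob is exposed through sysfs. A minimal sketch (the sysfs path is an assumption taken from the tp-smapi patchset and only exists on ThinkPads with that module loaded):

```shell
# set_stop_threshold THRESH SYSFS_DIR
# Writes the charge-stop threshold if the tp-smapi attribute exists.
set_stop_threshold() {
    t=$1 d=$2
    if [ -w "$d/stop_charge_thresh" ]; then
        echo "$t" > "$d/stop_charge_thresh"
    else
        echo "tp-smapi not available at $d" >&2
        return 1
    fi
}

# Typical call on a ThinkPad (path assumed; harmless no-op elsewhere):
set_stop_threshold 0 /sys/devices/platform/smapi/BAT0 || true
```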
[131058870010] |How can a Windows-based DHCP server update a DNS server on Linux? [131058870020] |The updates to the zone are protected by a key (similar to: allow-update { key "rndc-key"; }). [131058880010] |The DHCP server never updates the DNS server anyway, at least for the ones I've used. [131058880020] |What happens is that the DHCP client obtains the IP address (and potentially hostname) from the DHCP server and then itself tries to update the DNS server. [131058890010] |Simplest way of forwarding all mail from a server? [131058890020] |Possible Duplicate: Lightweight outgoing SMTP server [131058890030] |I am looking for a minimal mail solution (MTA) for a headless server which generates e-mails for local users and fully qualified addresses from cron jobs, etc. [131058890040] |Ideally all mail to local user foo should be mapped to foo@mydomain, with possible modifications for uid<1024, and sent off to an external SMTP server without involving /var/mail. [131058890050] |Some years ago, I used sSMTP for a similar task, and I was wondering if this is still the way ahead? [131058890060] |Also, how much of the default Debian mail system should/could I remove? [131058890070] |Update Ended up Googling a bit, and the obvious candidates seem to be [131058890080] |
  • sSMTP: Not actively developed
  • [131058890090] |eSMTP: Not actively developed according to its home page
  • [131058890100] |mSMTP: Recommended over sSMTP at http://www.scottro.net/qnd/qnd-ssmtp.html
  • [131058890110] |nullmailer: Suggested by Gilles
  • [131058890120] |Even though eSMTP is not developed anymore, it seems to have the nicest documentation. [131058890130] |It doesn't quite fit my needs though, at it seems to insist on delivering mail to local user foo via a Mail Delivery Agent (MDA) instead of pushing it out over smtp to foo@some.domain. [131058890140] |Or maybe it does do the mapping if I add qualify_domain to the config. [131058890150] |Might have to try it out... nullmailer appears to be running a queue in /var, which is not something I want. [131058890160] |Does anybody have experience with any of these? [131058900010] |Nullmailer might fit the bill: [131058900020] |Nullmailer is a mail transport agent designed to only relay all its messages through a fixed set of "upstream" hosts. [131058900030] |If you just install the Debian package, it will replace whatever MTA was there before. [131058910010] |My recommendation would be nullmailer too. [131058910020] |We were in a similar situation before (need to use a second MTA in Zimbra boxes for system-generated mail) and it fits the bill nicely. [131058910030] |However, your requisite of not using /var is a hurdle. [131058910040] |Could you elaborate on the reasons why? [131058910050] |If the problem is using the disk at all, you could mount nullmailer's queue on tmpfs. [131058910060] |It wouldn't be quite safe, but at least it wouldn't touch the disk at all. [131058920010] |IMHO, the easiest way to do this is to install postfix (if debconf asks you questions, pick "internet site"), then run these commands: [131058920020] |As long as the SMTP server specified in relayhost doesn't require authentication, allows you to relay or is a valid destination for the recipient address this will work. [131058920030] |The mailname parameter will be the default domain name. [131058920040] |So if you send mail to foo it will go to foo@my.domainname.com. [131058920050] |Mail sent by user bar will be from bar@my.domainname.com. 
[131058920060] |If you do need to authenticate and want to support TLS, also run the following commands: [131058920070] |Then in /etc/postfix/sasl_password_maps have this content: [131058920080] |All mail will be sent to your relayhost with the specified username and password. [131058930010] |RSync - Excluding a particular subdirectory and its children where the subdirectory exists at different levels... [131058930020] |G'Day, [131058930030] |I am using rsync to recursively sync a remote folder tree that looks something like the following: [131058930040] |I don't know what the folder tree will look like and it will change over time. [131058930050] |So what I want to be able to do is recursively rsync the above but exclude the folder "cache" and any sub folders it contains so that I ultimately end up syncing: [131058930060] |Any suggestions? [131058940010] |You want the --exclude flag. [131058940020] |For example, a local rsync: [131058940030] |It really is that simple -- that exclude rule will match a directory named "cache" anywhere in your tree. [131058940040] |For more information, look for "--exclude" and the "FILTER RULES" section on the rsync man page: [131058940050] |http://www.samba.org/ftp/rsync/rsync.html [131058950010] |How to select specific processes with 'top'? [131058950020] |In case of one program having multiple instances, running pidof program gives: [131058950030] |top -p accepts comma-delimited arguments: 1, 2, 3. [131058950040] |This means that top -p `pidof program` won't work: [131058950050] |Can you show me how to do this. [131058950060] |I'm not familiar with awk, sed, etc... [131058960010] |sed replaces your with ,: [131058970010] |An alternative to sed for simple things like this is tr: [131058970020] |tr can also easily handle a variable number of spaces: [131058970030] |Additionally, if you have it available, pgrep can work well here: [131058980010] |What's the difference between $(stuff) and `stuff`? 
[131058980020] |Running top -p $(pidof init) and top -p `pidof init` gives the same output. [131058980030] |Are these two ways of doing one thing, or are there differences? [131058990010] |The obvious difference I observe is that you cannot nest backticks while you can nest $(). [131058990020] |Maybe both exist for legacy reasons. [131058990030] |Similarly, the . and source commands are synonyms. [131059000010] |The old-style backquotes ` ` treat backslashes and nesting a bit differently. [131059000020] |The new-style $() interprets everything between ( ) as a command. [131059000030] |It works if the nested backquotes are escaped: [131059000040] |backslash fun: [131059000050] |The new-style $() applies to all POSIX-conformant shells. [131059000060] |As mouviciel pointed out, old-style ` ` might be necessary for older shells. [131059000070] |Apart from the technical point of view, the old-style ` ` also has a visual disadvantage: [131059000080] |
  • Hard to notice: I like $(program) better than `program`
  • [131059000090] |Easily confused with a single quote: '`'`''`''`'`''`'
  • [131059000100] |Not so easy to type (maybe not even on the standard layout of the keyboard)
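The nesting difference is easy to see in a shell; both lines below produce the same result, but the backquote form needs escaped delimiters:

```shell
# $() nests without any escaping:
a=$(echo outer $(echo inner))

# backquotes need the inner pair escaped:
b=`echo outer \`echo inner\``

echo "$a"   # outer inner
echo "$b"   # outer inner
```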
[131059000110] |(and SE uses ` ` for its own purposes; it was a pain writing this answer :) [131059010010] |$() does not work with the old Bourne shell. [131059010020] |But it has been years since I worked with the old Bourne shell. [131059020010] |PHP interface to manage Linux user accounts [131059020020] |I have a Linux web & Samba server. [131059020030] |I need to give administrators access to manage minor accounts & groups. [131059020040] |Is there a secure PHP web interface that can do this? [131059020050] |I don't want to give them shell access, but they need to be able to change passwords and create & manage accounts/groups. [131059020060] |Thanks! [131059030010] |You could take a look at webmin. [131059030020] |Not sure that it allows fine-grained controls. [131059040010] |Hi, this is more than an interface such as Webmin, but an Ubuntu-based distribution with OpenLDAP, Samba, a nice GUI with user self-service, and lots more: [131059040020] |Zentyal [131059050010] |Write a remote managing script [131059050020] |I need to be able to locally run a script that will connect to various servers and run commands on them. [131059050030] |What is the best way to accomplish this? [131059060010] |I would use ssh with key authentication. I believe ssh has a way to make sure that certain accounts can only log in from certain IPs, so I would limit it to that, because you might not want to set a passphrase on the keys (you could use a key agent to avoid that, but it has limitations too). [131059070010] |Personally, I would use Capistrano. [131059070020] |It's friendly and written in Ruby, and they already did all of the heavy lifting for you. [131059070030] |From Wikipedia: [131059070040] |Capistrano is a utility and framework for executing commands in parallel on multiple remote machines, via SSH. [131059080010] |You can run a command using ssh hostname command.
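The one-liner form above extends naturally to a list of machines; a minimal sketch (host names and the command are placeholders, and BatchMode assumes key authentication is already set up):

```shell
# Run a single command on each host in turn, with a header per host.
run_on_hosts() {
    cmd=$1; shift
    for h in "$@"; do
        echo "== $h =="
        ssh -o BatchMode=yes "$h" "$cmd"
    done
}

# Example: run_on_hosts uptime web1 web2 db1
```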
[131059080020] |If you have an entire script you need to execute, first use scp to transfer it to the remote host, then ssh to execute it. [131059090010] |What about using configuration management like Puppet or Chef? [131059090020] |This is maybe a little over the top for only one script, but if you need several such scripts it might be worth considering. [131059100010] |I have been pretty happy with a shell script called dssh.sh that utilizes ssh to communicate with many machines simultaneously. [131059100020] |It can execute the same command across lots of machines simultaneously and wait for them all to exit before returning. [131059100030] |To download and learn more about it, the best reference I have found is the BASH Cures Cancer blog. [131059110010] |A quickie bash 'for' loop might be easiest, perhaps something like: [131059110020] |Of course, cfengine/puppet/chef/capistrano are better configuration management options. [131059110030] |If you wanted to interactively send commands to the various shells, clusterm (http://sourceforge.net/projects/clusterm/) is a solid choice too. [131059120010] |Puppet and Chef are "pull" systems, and I've found that a complementary "push" system implemented using Capistrano, Fabric, or ssh(1) in a for-loop is necessary. [131059120020] |Of course, that means public keys in place for authentication, too; fortunately, those can be managed by Puppet or Chef. [131059130010] |Adding an empty line at the end of input [131059130020] |I have some command which produces output with no newline at the end, like this [131059130030] |Currently I overcome this with somecmd | sed 's/$/\n/' | tr -s '\n' Is there a better way to do this? [131059140010] |Feed it through some utility which reads input in lines and outputs lines, like awk { print $0 }. [131059150010] |Just run echo after it; it should generate a newline. [131059150020] |And if you need to feed it to something else, run it in a sub-shell: [131059150030] |Or,
as @camh points out, the subshell is actually not needed: you can execute it with a command list in the current shell environment with: [131059160010] |Is there a tool that allows logging of memory usage? [131059160020] |I want to monitor the memory usage of a process, and I want this data to be logged. [131059160030] |Short of writing my own code, does such a tool exist? [131059170010] |I think this link about programmatically monitoring a process's memory usage will be useful for resolving your need. [131059180010] |I have written a script to do exactly this: http://jeetworks.org/programs/syrupy. [131059180020] |It basically samples ps at specific intervals to build up a profile of a particular process. [131059180030] |The process can be launched by the monitoring tool itself, or it can be an independent process (specified by pid or command pattern). [131059190010] |sar (System Activity Reporter) from the sysstat package is your friend in cases like these. [131059190020] |Another way would be monitoring combined with historical data, e.g. Munin, pnp4nagios, rrdtool, ... [131059200010] |Besides the aforementioned sar, I'd recommend atop. [131059200020] |It saves a binary log that you can peruse afterwards, and besides memory it saves a lot of other information. [131059210010] |Occasionally, when the need arises, I just do "top -d 1 -b |grep >>somefile". [131059210020] |Not an elegant solution, but it gets the job done if you want a quick, crude value to verify your hypothesis. [131059220010] |You could try Valgrind. [131059220020] |Valgrind is an instrumentation framework for building dynamic analysis tools. [131059220030] |There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail. [131059220040] |You can also use Valgrind to build new tools.
[131059220050] |The Valgrind distribution currently includes six production-quality tools: a memory error detector, two thread error detectors, a cache and branch-prediction profiler, a call-graph generating cache and branch-prediction profiler, and a heap profiler. [131059230010] |What is the scope of using rsync to sync my target root file system from my host? [131059230020] |I have a target and host setup with Linux as the OS on both. [131059230030] |I have a root file system (RFS) on my target. [131059230040] |My RFS is in a development state, so it will undergo changes on the fly. [131059230050] |My requirement is that every time I make a change, I need to sync my current RFS to my target RFS. [131059230060] |One of my colleagues suggested that I could use rsync to accomplish this task. [131059230070] |Please share your valuable suggestions on this idea. [131059230080] |All other ideas are also welcome. [131059240010] |What kind of access do you have to the target? [131059240020] |If you can ssh into it, it's just [131059240030] |Note that you'll need an rsync executable on the target as well. [131059240040] |You'll want to set up some kind of passwordless authentication. [131059240050] |I guess during development you would just allow root to ssh in, and put a public key for root in place. [131059240060] |If the ssh server on the target is OpenSSH, then you need PermitRootLogin yes in /etc/sshd_config (or /etc/ssh/sshd_config or something), and the root public key would go in /root/.ssh/authorized_keys or /.ssh/authorized_keys depending on where root's home directory is set in /etc/passwd. [131059240070] |If you changed the bootloader, and perhaps if you changed the kernel, you'll also need to run the bootloader update utility. [131059250010] |How is an interrupt handled in Linux? [131059250020] |I just know that an interrupt is a hardware signal assertion caused on a processor pin.
[131059250030] |But I would like to know how Linux OS handles it. [131059250040] |What all are the things that happen when an interrupt occurs? [131059260010] |Here's a high-level view of the low-level processing. [131059260020] |I'm describing a simple typical architecture, real architectures can be more complex or differ in ways that don't matter at this level of detail. [131059260030] |When an interrupt occurs, the processor looks if interrupts are masked. [131059260040] |If they are, nothing happens until they are unmasked. [131059260050] |When interrupts become unmasked, if there are any pending interrupts, the processor picks one. [131059260060] |Then the processor executes the interrupt by branching to a particular address in memory. [131059260070] |The code at that address is called the interrupt handler. [131059260080] |When the processor branches there, it masks interrupts (so the interrupt handler has exclusive control) and saves the contents of some registers in some place (typically other registers). [131059260090] |The interrupt handler does what it must do, typically by communicating with the peripheral that triggered the interrupt to send or receive data. [131059260100] |If the interrupt was raised by the timer, the handler might trigger the OS scheduler, to switch to a different thread. [131059260110] |When the handler finishes executing, it executes a special return-from-interrupt instruction that restores the saved registers and unmasks interrupts. [131059260120] |The interrupt handler must run quickly, because it's preventing any other interrupt from running. [131059260130] |In the Linux kernel, interrupt processing is divided in two parts: [131059260140] |
  • The “top half” is the interrupt handler. [131059260150] |It does the minimum necessary, typically communicating with the hardware and setting a flag somewhere in kernel memory.
  • [131059260160] |The “bottom half” does any other necessary processing, for example copying data into process memory, updating kernel data structures, etc. [131059260170] |It can take its time and even block waiting for some other part of the system, since it runs with interrupts enabled.
[131059260180] |As usual on this topic, for more information, read Linux Device Drivers; chapter 10 is about interrupts. [131059270010] |Gilles already described the general case of an interrupt; the following applies specifically to Linux 2.6 on an Intel architecture (part of this is also based on Intel's specifications). [131059270020] |An interrupt is an event that changes the sequence of instructions executed by the processor. [131059270030] |There are two different kinds of interrupts: [131059270040] |
  • Synchronous interrupt (Exception) produced by the CPU while processing instructions
  • [131059270050] |Asynchronous interrupt (Interrupt) issued by other hardware devices
[131059270060] |Exceptions are caused by programming errors (e.g. divide error, page fault, overflow) that must be handled by the kernel. [131059270070] |The kernel sends a signal to the program and tries to recover from the error. [131059270080] |The following two classes of exceptions are distinguished: [131059270090] |
  • Processor-detected exception generated by the CPU while detecting an anomalous condition; divided into three groups: Faults can generally be corrected, Traps are reported after the trapping instruction (used e.g. for debugging), Aborts are serious errors.
  • [131059270100] |Programmed exception requested by the programmer, handled like a trap.
[131059270110] |Interrupts can be issued by I/O devices (keyboard, network adapter, ...), interval timers and (on multiprocessor systems) other CPUs. [131059270120] |When an interrupt occurs, the CPU must suspend its current instruction stream and service the newly arrived interrupt. [131059270130] |It needs to save the interrupted process's state so that the process can (probably) be resumed after the interrupt is handled. [131059270140] |Handling interrupts is a sensitive task: [131059270150] |
  • Interrupts can occur at any time; the kernel tries to get them out of the way as soon as possible
  • [131059270160] |An interrupt can be interrupted by another interrupt
  • [131059270170] |There are regions in the kernel which must not be interrupted at all
[131059270180] |Two different classes of interrupts are defined: [131059270190] |
  • Maskable interrupts issued by I/O devices; can be in two states, masked or unmasked. [131059270200] |Only unmasked interrupts get processed.
  • [131059270210] |Nonmaskable interrupts; critical malfunctions (e.g. hardware failure); always processed by the CPU.
[131059270220] |Every hardware device has its own Interrupt Request (IRQ) line. [131059270230] |The IRQs are numbered starting from 0. [131059270240] |All IRQ lines are connected to a Programmable Interrupt Controller (PIC). [131059270250] |The PIC listens on IRQs and assigns them to the CPU. [131059270260] |It is also possible to disable a specific IRQ line. [131059270270] |Modern multiprocessing Linux systems generally include the newer Advanced PIC (APIC), which distributes IRQ requests equally among the CPUs. [131059270280] |The intermediate step between an interrupt or exception and its handling is the Interrupt Descriptor Table (IDT). [131059270290] |This table associates each interrupt or exception vector (a number) with a specified handler (e.g. the divide error is handled by the function divide_error()). [131059270300] |Through the IDT, the kernel knows exactly how to handle the interrupt or exception that occurred. [131059270310] |So, what does the kernel do when an interrupt occurs? [131059270320] |
  • The CPU checks after each instruction if there's an IRQ from the (A)PIC
  • [131059270330] |If so, it consults the IDT to map the received vector to a handler function
  • [131059270340] |Checks if the interrupt was issued by an authorized source
  • [131059270350] |Saves the registers of the interrupted process
  • [131059270360] |Calls the appropriate function to handle the interrupt
  • [131059270370] |Loads the saved registers of the interrupted process and tries to resume it
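The IRQ lines and per-CPU delivery described above can be observed from userspace; Linux exposes the counters in /proc/interrupts:

```shell
# One row per IRQ line, one counter column per CPU, plus the handler name
# (Linux-specific; the exact rows depend on your hardware).
head -n 5 /proc/interrupts
```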
[131059280010] |http://tldp.org/LDP/tlk/dd/interrupts.html explains everything about the question you have asked [131059290010] |How to delete a file in Arch console? [131059290020] |pacman stopped working and asked me to delete a file if "another package manager isn't working". [131059290030] |What is the command for deleting files? [131059300010] |You delete files with the rm command, e.g.: [131059300020] |rm /tmp/pacman.lck [131059300030] |I hate to be "that guy", but if you don't know how to delete a file from the Linux command line, Arch is not the Linux distribution for you. [131059300040] |Try something easier, like Ubuntu or Linux Mint, first. [131059310010] |If pacman is complaining there is usually a good reason; don't ignore complaints unless you find good reason to, e.g. if it says "it's okay to ignore the error". [131059310020] |Good practice would be to check your logs first and foremost. [131059310030] |A lot of the logs live in /var/log/. [131059310040] |This will show the tail end, or last couple of lines, from everything.log, which contains most errors across the system as they happen: [131059310050] |It's also worth seriously noting that running two package managers/instances of pacman at the same time is not a good idea. [131059310060] |Did you start one, then try to start another? [131059310070] |It may be waiting for your input: [131059310080] |Then you try to use pacman to install something else without answering, and get: [131059310090] |The one thing that you should learn early is that deleting anything should be your absolute last resort, in any situation with Linux. [131059310100] |There is no 'trash bin' with the command line; delete really means it's gone. [131059320010] |What protocol/standard is used by terminals? [131059320020] |I was wondering how the "GUI" of a command line application is communicated over a network. [131059320030] |Most of the time, it's quite simple (plain text / input) but sometimes it's more complex (aptitude).
[131059320040] |Is it defined by some sort of standard so that anyone can write their own terminal and that all terminal implementations behave in the same way (colors, positioning, etc.)? [131059330010] |Console programs typically use curses or one of its successors to build the sorts of text user interfaces you're talking about. [131059330020] |(There are other libraries of this sort, but it's not important to list them all here.) [131059330030] |These libraries use one of two databases, called termcap and terminfo. [131059330040] |(This is one of the many BSD vs. AT&T differences you still find in modern Unix systems.) [131059330050] |These databases contain maps that tell how to control the many terminal types. [131059330060] |The vast majority of the terminal types you'll find defined in these databases didn't survive the days of real terminals, and so are now only of historical interest. [131059330070] |What's survived, and are used by programs like minicom and GUI "terminal" programs like xterm, GNOME Terminal, the OS X Terminal, etc., are a few common standards: [131059330080] |
  • ANSI X3.64: A standard for controlling "glass terminals" — as opposed to teletypes — based on special sequences of characters which the remote terminal interprets. [131059330090] |For instance, if the Unix box wants to tell the terminal to move its cursor to the upper left corner of the screen, it sends the characters ESC [ 1 ; 1 H. [131059330100] |The first two characters tell the terminal to expect a control sequence, the 1s are the row and column, and H is the command meaning "move cursor". [131059330110] |Unix programs don't embed these escape sequences; it's all handled by the libraries mentioned above. [131059330120] |Trivia: many PC BBSes used ANSI codes, too. [131059330130] |(Still do, actually.)
  • [131059330140] |DEC VT100: A variant of the ANSI terminal standard, it became popular enough that it's still supported by most terminal programs in some form. [131059330150] |Sometimes you see this called the VT102 protocol, that being a later cost-reduced — and therefore more popular — version of the same terminal design. [131059330160] |Usually terminal emulators actually support one of the extensions to this protocol, introduced with newer DEC terminals. [131059330170] |The most common are the VT220 and VT320 protocols. [131059330180] |Each of these later terminal types has more capabilities, so the driving factor determining which terminal types a program can emulate is how capable that program is. [131059330190] |These protocols are a backwards-compatible series, so sometimes you'll have a terminal emulator that says it supports VT320 but is missing a few things, yet is still useful because it supports enough of that protocol to work with your programs. [131059330200] |If you only use the VT220 subset of the protocol, that terminal emulator may be fully functional even though you've told curses (or whatever) that you want programs using it to use the VT320 protocol.
  • [131059330210] |xterm: A kind of amalgam of ANSI and the VT-whatever standards. [131059330220] |Whenever you're using a GUI terminal emulator like xterm or one of its derivatives, you're usually also using the xterm terminal protocol, typically the more modern xterm-color variant.
  • [131059330230] |Linux: The Linux console also uses an extended variant of the ANSI terminal protocol, in the same spirit as the xterm protocols. [131059330240] |Most of its extensions have to do with the differences between a PC and a glass terminal. [131059330250] |For example, the IBM keyboard has some keys not on a DEC VT-whatever. [131059330260] |(And vice versa.) [131059330270] |Like Linux, some Unix systems have their own console terminal protocol, too. [131059330280] |There's the scoansi ANSI X3.64 variant for SCO Unixes, for example.
  • [131059330290] |Wyse: A company, still around, that got started making glass terminals back in the minicomputer days. [131059330300] |Although Wyse terminals were able to emulate the VT100 and other popular terminal protocols, they also had their own native codes. [131059330310] |I mention it only because you still sometimes see programs that know these codes.
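The control sequences these protocols share are easy to poke at from a shell; for example the ANSI X3.64 cursor-move sequence quoted above, plus a color (SGR) sequence:

```shell
# ESC [ 1 ; 1 H moves the cursor to row 1, column 1 (the example above);
# ESC [ 31 m selects red, ESC [ 0 m resets attributes.
printf '\033[1;1H'
printf '\033[31m%s\033[0m\n' 'this prints in red on an ANSI terminal'
```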
[131059330320] |You can find out which terminal standard you're asking libraries like curses to use by looking at the TERM environment variable: [131059330330] |When you ssh to another system, the TERM variable is carried along so the remote Unix box knows how to communicate with your local terminal. [131059330340] |Because so many of these protocols are ANSI X3.64 variants, and because the ubiquitous ASCII and UTF-8 character coding standards take care of so much else, an incorrect TERM variable isn't typically catastrophic. [131059330350] |Things that tend to break are extended keys like Home and Page Up, Alt-whatever key combinations, and typographical display features like color, boldface, etc. [131059340010] |Is it possible to use ssh-copy-id once password-based logins are disabled? [131059340020] |Is it possible to use ssh-copy-id once password-based logins are disabled? [131059340030] |Otherwise, is there a utility for adding an authorized public key to a user, or should I edit the ~/.ssh/authorized_keys file manually? [131059350010] |If you have multiple ssh keys and already have one set up for authentication on the server, or if you have some other non-password authentication system in place, I believe you can use ssh-copy-id to copy an ssh identity to the server. [131059350020] |The ssh-copy-id manual page says: [131059350030] |ssh-copy-id is a script that uses ssh to log into a remote machine (presumably using a login password, so password authentication should be enabled, unless you've done some clever use of multiple identities) [131059350040] |ssh-copy-id uses ssh to log into the remote machine, so if you can currently log into the machine, you can use ssh-copy-id. [131059350050] |If you cannot log into the machine, then you cannot use ssh-copy-id. [131059350060] |New ids will be appended to ~/.ssh/authorized_keys.
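Since the mechanism is just an append to that file, you can also do it by hand when ssh-copy-id can't log in for you. A sketch of the remote-side effect (paths and the key file are placeholders):

```shell
# append_key PUBKEY_FILE SSH_DIR
# Reproduces what ssh-copy-id does on the remote end: make sure the
# directory and file exist with safe permissions, then append the key.
append_key() {
    key=$1 dir=$2
    mkdir -p "$dir"
    chmod 700 "$dir"
    touch "$dir/authorized_keys"
    chmod 600 "$dir/authorized_keys"
    cat "$key" >> "$dir/authorized_keys"
}

# Over an existing (e.g. key-based) login it is the classic one-liner:
#   cat ~/.ssh/id_rsa.pub | ssh user@host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```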
[131059360010] |Correlating /var/log/* timestamps [131059360020] |/var/log/messages, /var/log/syslog, and some other log files use a timestamp which contains an absolute time, like Jan 13 14:13:10. [131059360030] |/var/log/Xorg.0.log and /var/log/dmesg, as well as the output of $ dmesg, use a format that looks like [131059360040] |I'm guessing/gathering that the numbers represent seconds and microseconds since startup. [131059360050] |However, my attempt to correlate these two sets of timestamps (using the output from uptime) gave a discrepancy of about 5000 seconds. [131059360060] |This is roughly the amount of time my computer was suspended for. [131059360070] |Is there a convenient way to map the numeric timestamps used by dmesg and Xorg into absolute timestamps? [131059360080] |

    update

    [131059360090] |As a preliminary step towards getting this figured out, and also to hopefully make my question a bit more clear, I've written a Python script to parse /var/log/syslog and output the time skew. [131059360100] |On my machine, running ubuntu 10.10, that file contains numerous kernel-originated lines which are stamped both with the dmesg timestamp and the syslog timestamp. [131059360110] |The script outputs a line for each line in that file which contains a kernel timestamp. [131059360120] |

    Usage:

    [131059360130] |

    Expurgated output (see below for column definitions):

    [131059360140] |... rel_offset is 0 for all intervening lines ... [131059360150] |... rel_offset is -5280 for all remaining lines ... [131059360160] |... [131059360170] |The final lines are from a bit further down, still well above the end of the output. [131059360180] |Some of them presumably got written to dmesg's circular buffer before the suspend happened, and were only propagated to syslog afterwards. [131059360190] |This explains why all of them have the same syslog timestamp. [131059360200] |

    Column definitions:

    [131059360210] |abs is the time logged by syslog. [131059360220] |abs_since_boot is that same time in seconds since system startup, based on the contents of /proc/uptime and the value of time.time(). [131059360230] |rel_time is the kernel timestamp. [131059360240] |rel_offset is the difference between abs_since_boot and rel_time. [131059360250] |I'm rounding this to the tens of seconds so as to avoid off-by-one errors due to the absolute (i.e. syslog-generated) timestamps only having seconds precision. [131059360260] |That's actually not the right way to do it, since it really (I think..) just results in a smaller chance of having an off-by-10 error. [131059360270] |If somebody has a better idea, please let me know. [131059360280] |I also have some questions about syslog's date format; in particular, I'm wondering if a year ever shows up in it. [131059360290] |I'm guessing no, and in any case could most likely help myself to that information in TFM, but if somebody happens to know it would be useful. Assuming, of course, that someone uses this script at some point in the future, instead of just busting out a couple of lines of Perl code. [131059360300] |
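For a single kernel timestamp, the mapping the question asks for can be sketched in shell (this deliberately ignores time spent suspended, which is exactly the skew the script measures):

```shell
# Map a dmesg-style timestamp (seconds since boot) to wall-clock time,
# using the current uptime from /proc/uptime. Suspend time is NOT accounted
# for, so the result is off by however long the machine slept.
dmesg_ts=1234.56                          # illustrative kernel timestamp
uptime_s=$(cut -d' ' -f1 /proc/uptime)    # seconds since boot, right now
abs_epoch=$(awk -v now="$(date +%s)" -v up="$uptime_s" -v ts="$dmesg_ts" \
    'BEGIN { printf "%d", now - up + ts }')   # boot epoch + kernel timestamp
date -d "@$abs_epoch"                     # GNU date: epoch -> readable time
```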

    Next:

[131059360310] |So unless some welcome revelation is unto me given by one of You, my next step will be to add a function to get the time skew for a given kernel timestamp. [131059360320] |I should be able to feed the script one or a set of syslogs, along with a kernel timestamp, to get an absolute timestamp. [131059360330] |Then I can get back to debugging my Xorg issues, which escape me at the moment. [131059370010] |Interesting problem; I'm not sure I've ever tried to do this. [131059370020] |But I have noticed the timestamp you are talking about, and I have always believed it to be seconds since bootup. [131059370030] |In the syslog on my server, I have: [131059370040] |I would imagine this is fairly consistent among most Linux distros, as this is the kernel spitting out its stuff. [131059370050] |And here I have the date along with the timestamp. [131059380010] |Your answer lies in the article Timestamping Linux kernel printk output in dmesg for fun and profit, by Jeff Schroeder. [131059390010] |How can I tell what version of OS X I'm on from the command line? [131059390020] |I frequently find myself SSHing into various OS X machines, and it'd be useful if I could tell what version of OS X I was on when I'm doing that. uname -a doesn't quite work, since Darwin kernel versions don't always change with the rest of the system. [131059400010] |Here is a blog article with instructions: How to Get the Mac OS X Version in a Shell Script [131059410010] |Try this: [131059420010] |The answer that suggested "system_profiler | grep 'System Version'" is what I have tried to use in the past, but it has 2 problems. [131059420020] |
  • It is slow since it generates a full system_profiler dump of the machine, gathering all hardware and software inventory information.
  • [131059420030] |
  • The output of system_profiler has changed over time. e.g. the output of grep for 'Serial Number' on 10.6.4 is "Serial Number (system): ZNNNNNZNZZZ", whereas on 10.4.11 it was "Serial Number: ZNNNNZNZZZZ" - the point being that the parse-ability of the output matters, and the added " (system)" piece can be problematic unless you are expecting the change.
  • [131059420040] |My suggestion is to use "sw_vers". [131059420050] |Example output as of 10.6.4: [131059430010] |The easiest way is: [131059430020] |From http://tinyapps.org/blog/mac/201008140700_os_x_version_terminal.html: [131059430030] |Especially handy when resetting a password in single user mode, since the method varies based on which version of OS X is running. [131059440010] |Open Terminal.app [131059450010] |Can the focus of a Microsoft LifeCam HD-5000 be fixed at infinity under linux? [131059450020] |I have two Microsoft LifeCam HD-5000 webcams which I am using as a home security system with the Motion package for linux. [131059450030] |One problem I have is that because these webcams have auto-focus, sometimes they will try to automatically adjust the focus and will create false positive motion detection. [131059450040] |It seems like you can fix the focus at infinity under windows. [131059450050] |Is there any way I could do that under linux, using uvcvideo? [131059460010] |Perhaps this works: [131059470010] |Is there an equivalent of GNU Screen's "log" command in tmux? [131059470020] |I make heavy use of screen's "log" command to log the output of a session to a file, when I am making changes in a given environment. [131059470030] |I searched through tmux's man page, but couldn't find an equivalent. [131059470040] |Is anyone aware of a similar feature in tmux, or do I have to write my own wrapper scripts to do this? [131059470050] |EDIT: I'm aware of 'script' and other utilities that allow me to log a session. [131059470060] |The reason that screen's functionality is so useful is the ability to define a logfile variable which uses string escapes to uniquely identify each session. [131059470070] |e.g. [131059470080] |I have a shell function which, given a hostname, will SSH to that host in a new screen window and set the window title to the hostname. [131059470090] |When I start a log of that session, it is prefixed with the window title. 
[131059470100] |If this functionality doesn't exist in tmux, I'll have to create a new set of shell functions to set up 'scripts' of sessions I want to log. [131059470110] |This isn't hugely difficult, but it may not be worth the effort given that screen does exactly what I need already. [131059480010] |After looking through the documentation for tmux, I can't find any equivalent of screen's window logging. [131059480020] |It looks like you'd have to use your shell functions to do what you'd like, or just use screen. [131059480030] |You can turn on debugging, which logs both the server and client side, but it also includes a lot of extraneous tmux-related logs as well, so it wouldn't be exactly what you're asking for. [131059480040] |You could possibly use tmux's clipboard to automate saving the buffer to another session, which would be set up to accept the contents of the clipboard and save to a file. [131059480050] |This seems kind of hackish. [131059490010] |I do it using script; this is from my tmux.conf file: [131059490020] |bind ^C new-window "script -f /home/jcosta/mydocs/work/logs/$(date '+%d%m%Y_%H%M%S')_$$.log" [131059490030] |bind c new-window "script -f /home/jcosta/mydocs/work/logs/$( date '+%d%m%Y_%H%M%S')_$$.log" [131059490040] |bind | split-window "script -f /home/jcosta/mydocs/work/logs/$(date '+%d%m%Y_%H%M%S')_$$.log" [131059500010] |Let me see if I have deciphered your screen configuration correctly: [131059500020] |
  • You use something like logfile "%t-screen.log" (probably in a .screenrc file) to configure the name of the log file that will be started later.
  • [131059500030] |
  • You use the title (C-a A) screen command to set the title of a new window, or you do screen -t ssh0 to start a new screen session.
  • [131059500040] |
  • You use the C-a H (C-a :log) screen command to toggle logging to the configured file. [131059500050] |If so, then the following is nearly equivalent (requires tmux 1.3+ to support #W in the pipe-pane shell command; pipe-pane is available in tmux 1.0+): [131059500060] |
  • [131059500050] |If so, then is nearly equivalent (requires tmux 1.3+ to support #W in the pipe-pane shell command; pipe-pane is available in tmux 1.0+): [131059500060] |
  • In a configuration file (e.g. .tmux.conf): [131059500070] |
  • Use tmux rename-window (C-b ,) to rename an existing window, or use tmux new-window -n 'ssh ' to start a new tmux window, or use tmux new-session -n 'ssh ' to start a new tmux session.
  • [131059500080] |
  • Use C-b H to toggle the logging.
[131059500090] |There is no notification that the log has been toggled, but you could add one if you wanted: [131059500100] |Note: The above line is shown as if it were in a configuration file (either .tmux.conf or one you source). tmux needs to see both the backslash and the semicolon; if you want to configure this from a shell (e.g. tmux bind-key …), then you will have to escape or quote both characters appropriately so that they are delivered to tmux intact. [131059500110] |There does not seem to be a convenient way to show different messages for toggling on/off when using only a single binding (you might be able to rig something up with if-shell, but it would probably be ugly). [131059500120] |If two bindings are acceptable, then try this: [131059510010] |How would one permanently set names for network interfaces (wifi/ethernet) in Arch? [131059510020] |It seems that on every reboot the names of the different NICs (wifi/ethernet) are swapped. [131059520010] |Use udev. [131059520020] |In Gentoo and many other distributions this is done automatically, but you may want to base yours on that if you want a quick'n'dirty solution: [131059520030] |Did you change any udev configuration? [131059530010] |Expansion of the word UNIX? [131059530020] |Is UNIX an acronym? [131059530030] |What does it stand for? [131059540010] |Despite often being written in all caps, UNIX is not an acronym, and therefore it doesn't have a full expansion. [131059540020] |The name is a play on Multics, which was an acronym (Multiplexed Information and Computing Service). [131059540030] |That was another early operating system around at the time of Unix's creation. [131059540040] |Edit: As Marc stated, it was originally called Unics, but once it could support multiple users it was renamed Unix, which is not an acronym. [131059550010] |First, there was UNICS, for Uniplexed Information Computing System. [131059550020] |Then the name changed to UNIX. [131059550030] |Same pronunciation. 
[131059560010] |Why do I need ntfs-3g when I have already enabled NTFS support in the kernel? [131059560020] |When configuring the kernel I see an option to add read-write support for NTFS. [131059560030] |Then when mounting my NTFS partition I still have to install ntfs-3g and pass ntfs-3g as the type. [131059560040] |I thought if I added NTFS support in the kernel then I wouldn't have to install a library for it. [131059560050] |Why is that so? [131059570010] |The kernel driver is still essentially read-only; it does not have full write support yet, only limited writing with many restrictions. [131059580010] |Why is screen seemingly doing nothing with commands passed with -X? [131059580020] |I've been trying to set up an automated backup system for a minecraft server, and I'm having trouble with screen; specifically, when using 'screen -r sessionname -X "/var/minecraft/somebatchfile"', nothing happens. [131059580030] |My process flow is somewhat like this at the moment: [131059580040] |screen -m -d -S minecraft /var/minecraft/bin/server_nogui.sh [131059580050] |This starts the minecraft server without any trouble. [131059580060] |However, the issue is that even simple followups like this fail: [131059580070] |screen -r minecraft -X "stop" [131059580080] |I get no error message or success message, and the server does not actually disconnect clients and shut down, like it should. [131059580090] |I assume I'm doing something wrong, but I don't know what. [131059580100] |Is there some obvious mistake I'm making? [131059580110] |I've read the man page a bit but I'm having no luck figuring it out myself. [131059590010] |You have to give the parameter -X a screen command; I think you want to "stuff" a minecraft-server command into the screen session. [131059590020] |The echo sends a carriage return, so the command "stop" gets executed. [131059590030] |For sending it over ssh you have to enclose the command in " " (you could also use ` `, but that wouldn't let you do the command substitution). 
[131059590040] |Beware that ! is a reserved word; you have to escape it. [131059590050] |It is also possible to include a user-generated newline in the command to execute it: [131059590060] |Escaping ! isn't necessary here. [131059600010] |startx results in a blank screen. [131059600020] |I just installed Xorg on Arch Linux but when I run startx, I only get a blank screen. [131059600030] |What could be the problem here? [131059610010] |Processing control characters [131059610020] |I have a log file which contains a bunch of non-visible control characters, such as hex \u0003. [131059610030] |I would like to replace these using something like sed, but can't get the first part of the regex to match: [131059610040] |/s/^E/some_string [131059610050] |I am creating the ^E by pressing CTRL-V CTRL-0 CTRL-3 to create the special character, as read from the 'man ascii' page: [131059610060] |003 3 03 ETX [131059610070] |However, nothing matches this control character. [131059610080] |Any help appreciated! [131059620010] |This perl one-liner will do the job - beware, it will modify the file: [131059620020] |If you want to replace a number of characters with character codes in a specified range: [131059620030] |(echo {A..Z} produces a string of alphabetic characters in bash) [131059630010] |You can also use the tr command. [131059630020] |For example: [131059630030] |To delete the control character: [131059630040] |To replace the control character with another: [131059630050] |If you are not sure what the value of the control character is, perform an octal dump and it will print it out: [131059630060] |So the value of control character ^[ is \033. 
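An alternative to typing the literal control character is letting the tool expand an escape itself; a sketch on a made-up sample file (tr takes octal escapes, and \x03 in the sed expression is a GNU sed extension):

```shell
# Create a sample log containing an ETX (0x03) control character:
printf 'foo\003bar\n' > /tmp/ctrl-sample.log

# Delete the control character with tr, using an octal escape:
tr -d '\003' < /tmp/ctrl-sample.log               # prints "foobar"

# Or substitute it with sed, using a hex escape (GNU sed):
sed 's/\x03/some_string/g' /tmp/ctrl-sample.log   # prints "foosome_stringbar"
```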
[131059640010] |This will replace all non-printable characters with a # [131059650010] |I'm not sure if I understand what you want, but if it is to substitute for occurrences of the successive hex bytes 0x00 0x03, this should work: [131059660010] |Using cron to run a script [131059660020] |Hey guys, I'm trying to run a script using cron. I'm using a crontab created by the user ashtanga; in the crontab I have [131059660030] |at the top of the script I have: [131059660040] |and the user ashtanga does have execute permission on the file, but cron is not running the script; it's giving me the error: [131059660050] |So my question is, how can I get cron to run the script? [131059670010] |The user ashtanga doesn't have access to /home/custom-django-projects/SiteMonitor/sender.py. [131059670020] |This looks like another user's home area? [131059670030] |Try running the script as ashtanga. [131059670040] |It's always a good first step, before you add anything to cron. [131059670050] |It might be to do with your cron environment. [131059670060] |Take a look at this Cron FAQ: It works from the command line but not in crontab [131059680010] |The user does have permission, as the permission is set to 755. The problem is that the user doesn't know of the environment variables needed. [131059680020] |Try using bash instead and see if it picks them up then. [131059680030] |Otherwise, set them up manually. [131059680040] |Start troubleshooting by running the script using the /bin/sh shell. [131059680050] |You should get the same error then. [131059690010] |Redhat Enterprise Linux 4.7 - Ping problem [131059690020] |When I try to ping some of the IP addresses I get a "connect: No buffer space is available" error. [131059690030] |Also, when I check /var/log/messages, there are some errors like the following [131059690040] |Do you have any ideas about this problem? [131059700010] |There is a related thread here and a possible solution here. 
[131059700020] |Basically this happens because the interface "neighbour" table fills up when too many different addresses connect to your server within a short interval. [131059700030] |Increasing the table size and using more aggressive garbage collection for its contents resolves this issue. [131059710010] |How do I recursively check permissions in reverse? [131059710020] |There's a command, I think it comes with apache, or is somehow related to it, that checks permissions, all the way down. [131059710030] |So if I have /home/foo/bar/baz it will tell me what the permissions are for baz, bar, foo, and home. [131059710040] |Does anyone know what this command is, or another way of doing this? [131059710050] |The command basically starts at the argument and works its way up to /, letting you know what the permissions are along the way so you can see if you have a permission problem. [131059720010] |I'm not aware of any commands, but it is quite easy to write a script: [131059720020] |Example: [131059730010] |How about a recursive bash function for a fun solution: [131059740010] |This could easily be made a one-liner. [131059740020] |This is not recursive and should be a relatively fast way of doing this in bash. [131059740030] |Calling pwd in each loop isn't particularly fast, so avoid it if you can. [131059740040] |Alternatively, a one-liner for the current directory. [131059750010] |The utility you may be thinking of is the namei command. [131059750020] |According to the manual page: [131059750030] |Namei uses its arguments as pathnames to any type of Unix file (symlinks, files, directories, and so forth). [131059750040] |Namei then follows each pathname until a terminal point is found (a file, directory, char device, etc). [131059750050] |If it finds a symbolic link, we show the link, and start following it, indenting the output to show the context. 
[131059750060] |The output you desire can be obtained as follows: [131059750070] |The namei command is part of the util-linux-ng software package. [131059750080] |See the manual page for more details. [131059760010] |Open a file given by the result of a command in vim [131059760020] |I find myself doing the following almost every day: [131059760030] |
  • running a find (find -name somefile.txt)
  • [131059760040] |
  • opening the result in vim
  • [131059760050] |The problem is I have to copy-paste the result of the find into the vim command. [131059760060] |Is there any way to avoid having to do this? [131059760070] |I have experimented a bit (find -name somefile.txt | vim) but haven't found anything that works. [131059760080] |Thanks in advance [131059770010] |You can use command substitution: [131059770020] |or [131059780010] |Most commands can take - [131059780020] |as a file name, which means stdin. [131059790010] |Try this: [131059790020] |
  • start vim
  • [131059790030] |
  • in vim, use the following:
  • [131059790040] |:r!find / -name 'expression' [131059790050] |The results should appear in vim when the search is complete. [131059790060] |Or [131059790070] |Try: [131059800010] |I like to use the backticks ` (it's on the same key as the ~) [131059800020] |The shell executes the command inside the backticks, and the output can then be used by the outer command. [131059800030] |The above will find all files named somefile.txt, thus allowing you to use :next to move through all the files. [131059800040] |It's very useful if you spend a couple of tries refining the command, because you can then use history substitution to repeat the command for the editor. [131059810010] |If you don't mind running the command again: press Up and append an xargs command. [131059810020] |Or use history substitution and run [131059810030] |There's a lightweight way of saving the output of a command that works in ksh and zsh but not in bash (it requires the output side of a pipeline to be executed in the parent shell). [131059810040] |Pipe the command into the function K (zsh definition below), which keeps its output in the variable $K. [131059810050] |Automatically saving the output of each command is not really possible with the shell alone; you need to run the command in an emulated terminal. [131059810060] |You can do it by running inside script (a BSD utility, but available on most unices including Linux and Solaris), which saves all output of your session to a file (there's still a bit of effort needed to reliably detect the last prompt in the typescript). [131059820010] |What distribution-maintained cross-compile toolchain packages exist? [131059820020] |I have just learned of Gentoo's sys-devel/crossdev package. [131059820030] |This is a package that is useful for creating a cross-compiling toolchain. [131059820040] |Are there any other such packages out there on other distributions? 
[131059820050] |I'm specifically interested in distro-maintained packages because I've tried a couple of others (buildroot, crosstool) and it seems that any time the distribution touches gcc or binutils, it invariably breaks at least the building of the toolchain, if not the building of the project itself. [131059830010] |On Debian, there are apt-cross and dpkg-cross from Emdebian, which let you set up cross-compilation for many architectures (you get cross-compilers and libraries). [131059830020] |On Ubuntu, there's a cross-toolchain for ARM, and a project to improve on this. [131059830030] |You can also create a toolchain using crosstool-ng, which is not tied to a distribution. [131059840010] |Remove files provided by a pipe [131059840020] |I have this command chain: [131059840030] |This gives me this output: [131059840040] |These files are infected PHP files and I would like to remove them with my chain. [131059840050] |How can I do that? [131059850010] |Although you can probably do this whole thing with the find command alone, you can try appending | xargs rm -f to that command. [131059850020] |Here's what it would look like: [131059850030] |Note that the xargs rm command works here because you know there aren't any special characters in the file names. [131059850040] |If there might be spaces in the file names, you can use xargs -d '\n' rm -f (Linux only). [131059860010] |Add applications to the launcher [131059860020] |I recently installed Matlab in /usr/matlab. [131059860030] |What do I do to make it appear in the application launcher at the top left of the taskbar? [131059870010] |You can add a launcher to the panel by right clicking on a free area on the panel and selecting "Add to panel" and then "Custom Application Launcher" (or, if the application is already present in the applications menu, you can select "Application Launcher" and then select the application from the menu). 
[131059870020] |You can add an entry into the applications menu by right clicking on it and selecting "Edit menu". [131059880010] |What are the pros/cons of Upstart and systemd? [131059880020] |It appears systemd is the hot new init system on the block, same as Upstart was a few years ago. [131059880030] |What are the pros/cons for each? [131059880040] |Also, how does each compare to other init systems? [131059890010] |This pretty much sums up everything: [131059890020] |http://ubuntuforums.org/archive/index.php/t-1595983.html [131059900010] |I saw systemd mentioned on the Arch General ML today. [131059900020] |So I read up on it. [131059900030] |The H Online, as ever, is a great source for Linux technology and is where I found my place to start researching systemd as a SysV init and Upstart alternative. [131059900040] |However, the H Online article (in this case) isn't a very useful read; its real use is that it gives links to the useful reads. [131059900050] |The real answer is in the announcement of systemd. [131059900060] |It gives some crucial points about what's wrong with SysV init, and what new systems need to do: [131059900070] |
  • To start less.
  • [131059900080] |
  • And to start more in parallel.
  • [131059900090] |Its major plan to do this seems to be to start services only as they're needed, and to start a socket for that service, so that the service that needs it can connect to the created socket long before the daemon is fully online. [131059900100] |Apparently a socket will retain a small amount of buffered data, meaning that no data will be lost during the lag; it will be handled as soon as the daemon is online. [131059900110] |Another part of the plan seems to be to not serialize filesystems, but instead mount those on demand as well; that way you're not waiting on your /home/, etc (not to be confused with /etc) to mount and/or fsck when you could be starting daemons, as /, /var/, etc. are already mounted. [131059900120] |It said it was going to use autofs to this end. [131059900130] |It also has the goal of creating .desktop-style init descriptors as a replacement for scripts. [131059900140] |This will prevent tons of slow sh processes and even more forks of processes from things like sed and grep that are often used in shell scripts. [131059900150] |They also plan not to start some services until they are asked for, and perhaps even shut them off if they are no longer needed; the bluetooth module and daemon are only needed when you're using a bluetooth device, for example. [131059900160] |Another example given is the ssh daemon. [131059900170] |This is the kind of thing that inetd is capable of. Personally I'm not sure I like this, as it might mean latency when I do need them, and in the case of ssh I think it means a possible security vulnerability: if my inetd were compromised, the whole system would be. [131059900180] |However, I've been informed that using this to breach the system is infeasible, and that if I want to I can disable this feature per service and in other ways. [131059900190] |Another feature is apparently going to be the capability to start based on time events, either at a regularly scheduled interval or at a certain time. 
[131059900200] |This is similar to what crond and atd do now. [131059900210] |Though I was told it will not support user "cron". [131059900220] |Personally this sounds like the most pointless thing. [131059900230] |I think this was written/thought up by people who don't work in multiuser environments; there isn't much purpose to user cron if you're the only user on the system, other than not running as root. [131059900240] |I work on multiuser systems daily, and the rule is always run user scripts as the user. [131059900250] |But maybe I don't have the foresight they do, and it will in no way make it so that I can't run crond or atd, so it doesn't hurt anyone but the developers I suppose. [131059900260] |The big disadvantage of systemd is that some daemons will have to be modified in order to take full advantage of it. [131059900270] |They'll work now, but they'd work better if they were written specifically for its socket model. [131059900280] |It seems for the most part the systemd people's problem with upstart is the event system, and that they believe it not to make sense or to be unnecessary. [131059900290] |Perhaps their words put it best. [131059900300] |Or to put it simpler: the fact that the user just started D-Bus is in no way an indication that NetworkManager should be started too (but this is what Upstart would do). [131059900310] |It's right the other way round: when the user asks for NetworkManager, that is definitely an indication that D-Bus should be started too (which is certainly what most users would expect, right?). [131059900320] |A good init system should start only what is needed, and that on-demand. [131059900330] |Either lazily or parallelized and in advance. [131059900340] |However it should not start more than necessary, particularly not everything installed that could use that service. [131059900350] |As I've already said this is discussed much more comprehensively in the announcement of systemd. 
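For concreteness, the socket-activation scheme described above is configured with a pair of unit files; a minimal sketch, with a made-up service name and paths (not taken from the announcement):

```ini
# foo.socket - systemd listens on the socket on the daemon's behalf
[Socket]
ListenStream=/run/foo.sock

[Install]
WantedBy=sockets.target

# foo.service - started only when a client first connects to foo.socket
[Service]
ExecStart=/usr/bin/food
```

Until the first client connects, only the socket exists; the daemon itself is started lazily, which is the "start less, start on demand" goal described above.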
[131059910010] |Both upstart and systemd are attempts to solve some of the problems with the limitations of the traditional SysV init system. [131059910020] |For example, some services need to start after other services (for example, you can't mount NFS filesystems until the network is running), but the only way in SysV to handle that is to set the links in the rc#.d directory such that one is before the other. [131059910030] |Add to that, you might need to re-number everything later when dependencies are added or changed. [131059910040] |Upstart and Systemd have more intelligent settings for defining requirements. [131059910050] |Also, there's the issue with the fact that everything is a shell script of some sort, and not everyone writes the best init scripts. [131059910060] |That also impacts the speed of the startup. [131059910070] |Some of the advantages of systemd I can see: [131059910080] |
  • Every process started gets its own cgroup or a particular cgroup.
  • [131059910090] |
  • Pre-creation of sockets and file handles for services, similar to how xinetd does for its services, allowing dependent services to start faster. [131059910100] |For example, systemd will hold open the filehandle for /dev/log for syslog, and subsequent services that send to /dev/log will have their messages buffered until syslogd is ready to take over.
  • [131059910110] |
  • Fewer processes run to actually start a service. [131059910120] |This means you aren't writing a shell script to start up your service. [131059910130] |This can be a speed improvement, and (IMO) something easier to set up in the first place.
  • [131059910140] |Disadvantages: Written by the same guy who wrote PulseAudio. [131059910150] |Some people don't like that. [131059910160] |edit: Another disadvantage is that to take advantage of systemd's socket/FH preallocation, many daemons will have to be patched to have the FH passed to them by systemd. [131059920010] |Well, one thing most of you forgot is the organisation of processes in cgroups. [131059920020] |So if systemd starts a thing, it will put this thing in its own cgroup, and there is no (unprivileged) means for the process to escape that cgroup. [131059920030] |This has several consequences: [131059920040] |
  • An administrator of a big system with many users has efficient new ways to identify malicious users/processes.
  • [131059920050] |
  • The priorities for CPU scheduling can be determined better, as done by the "Wonder autocgroup patch".
  • [131059930010] |Measuring internet connection quality [131059930020] |I'm having WLAN trouble on my dualbooting win/linux laptop - the connection on the linux side has occasional trouble, and seems somewhat slow and chunky. [131059930030] |I would try to fix this issue by fiddling with all sorts of router settings, or WLAN kernel module backports, or whatever - but the problem is, it's hard to know if the settings have had any effect, since the problem appears only as a subjective feeling about connection speed. [131059930040] |So I'd like to have some way of executing a measurement that would tell me if the problem was affected or not. [131059930050] |So far I've just used iwconfig, which returns normal data, and ping -c 100 -i 0.2 on the router and some stable website IP addresses, but the summary doesn't give me all that good data, only the occasional packet loss. [131059930060] |One piece of information that's missing from the summary is the count of packets with clearly deviating roundtrip time, since that's one of the symptoms I've noticed - most packets come back with a regular time, but some of them take a lot longer. [131059930070] |So what tools can I use to get some actual, numerical data on the quality of my internet connection? [131059930080] |(And just in case someone's wondering, yes, the problem is real and not just confirmation bias, as it sometimes appears bad enough to throw me off the WLAN connection. [131059930090] |It's probably somehow related to this Ubuntu bug and/or this Redhat bug) [131059940010] |Maybe set up smokeping on the Linux side, and point it at your AP? [131059940020] |Smokeping will periodically (configurable) send ~20 pings at the same time, and then graph how many returned and the range of times that they returned in. [131059940030] |If you have a lot of dropped packets, or a really wide range, then you should be concerned. 
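For the specific number the asker wants, the count of round trips with clearly deviating times, plain ping output can be post-processed with awk. A sketch, run here on canned sample lines; the 10 ms threshold is an arbitrary assumption, and in real use you would pipe ping -c 100 -i 0.2 <router> straight in:

```shell
# Canned ping output (stand-in for: ping -c 100 -i 0.2 192.168.1.1):
sample='64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.2 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.3 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=58.7 ms'

# Count replies whose round-trip time exceeds a 10 ms threshold:
printf '%s\n' "$sample" |
    awk -F'time=' '/time=/ { if ($2 + 0 > 10) n++ }
                   END { printf "%d deviating packets\n", n + 0 }'
# prints "1 deviating packets"
```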
[131059940040] |If you don't want to run Smokeping you could use fping directly, which is what Smokeping calls to collect the data. [131059940050] |It is a lot easier to interpret with the graph, though. [131059950010] |Use tcpdump to capture packets that are leaving your local LAN subnet. [131059950020] |Then use tools like Wireshark or tshark to do some analysis on how much loss you're experiencing, as well as what the variance in round-trip time is, and how TCP is behaving. [131059950030] |(Windowing, retransmits, etc.) [131059950040] |The reason I suggest this rather than running some sort of ping/traceroute-based monitoring software is that many network operators treat ICMP traffic (and generation of ICMP unreachables, which traceroute relies on) differently from actual UDP/TCP traffic. [131059950050] |Using an ICMP-based tool may therefore give you spurious results. [131059960010] |Getting an existing Linux installation from one computer to boot on another [131059960020] |I have an existing Dell Precision 690 workstation set up to dual-boot Windows XP and CentOS 5.5. [131059960030] |These operating systems are installed on two separate drives. [131059960040] |I have a GRUB menu on the Linux drive, which is set as drive 1 and points to the Windows boot info on drive 2. [131059960050] |I tried taking the Linux drive and installing it in a new HP Z800 workstation to see if I could be lucky enough to get it to boot, but it didn't. Immediately after it starts to boot I get a few errors. [131059960060] |Here is what the system shows: [131059960070] |Right after the message Red Hat nash version 5.1.19.6 starting I get the following lines: [131059960080] |Is there something I can tweak to get this to possibly boot? [131059960090] |I'd really like to not have to reload CentOS 5.5 and the specialized software on this machine. [131059960100] |I do have a GRUB menu set up on this drive; could this by chance be my problem?
[131059960110] |The drives in the old machine are set up with Linux as drive 1 and Windows as drive 2, and the Linux drive hosts the GRUB menu allowing me to boot to Linux or Windows. [131059960120] |Could this somehow be the problem? [131059960130] |I do know of a way around this with Windows: install a secondary HDD controller card in the machine, install the drivers, hook up the drive to the controller in the old machine and make sure it boots, move the drive and controller to the new machine and boot off it, load the motherboard drivers (specifically the HDD controller drivers), and then you can take out the controller card, connect the HDD directly to the motherboard, and you're set. [131059960140] |This same thing is probably accomplishable in Linux, but I'm not sure. [131059960150] |This might be a last-ditch effort to try if nothing else works. [131059970010] |If you get this far, it means the bootloader loaded the kernel and initrd/initramfs successfully, but the kernel is not finding the root device. [131059970020] |So you should be able to boot by passing something like root=/dev/sda42 on the kernel command line. [131059970030] |At the Grub prompt, edit the entry for Linux, and look for the line that begins with linux. [131059970040] |On that line, there should be a parameter that looks like root=/dev/sda42. [131059970050] |Change it to root=/dev/sdb42, i.e. a different drive. [131059970060] |The current letter might not be a, and the letter that works might not be b, though if you have two drives you'll probably just need to swap sdb for sda or vice versa. [131059970070] |The order of the drive letters in Linux is unrelated (or at least not directly related) to the order in the BIOS, in Grub, or in Windows (it depends on the order in which the drivers are loaded). [131059970080] |(There are ways around this, but they won't help you right now.) [131059970090] |When you boot, you might get errors if entries in /etc/fstab don't match the current disk device names.
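One of the "ways around this" is to identify the filesystem by UUID instead of by device letter, so the sda/sdb ordering stops mattering. A sketch, with an invented UUID (substitute the real one reported by blkid /dev/sdaN):

```shell
# /etc/fstab: refer to the root filesystem by UUID rather than /dev/sdaN
# (the UUID below is made up; use the output of blkid on your drive):
#   UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /  ext3  defaults  1 1

# The equivalent on the kernel command line, if your initrd supports it:
#   root=UUID=3e6be9de-8139-11d1-9106-a43f08d823a6
```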
[131059970100] |If you're not able to get to a repair console, reboot and (in addition to the root= change) add init=/bin/sh to drop directly to a shell, then run [131059980010] |Multibooting FreeBSD alongside Linux distros [131059980020] |Long story short: I have one laptop and need to multiboot FreeBSD/OpenBSD as well as a number of Linux distros. [131059980030] |Which one do you think should go first? [131059980040] |FreeBSD offers its own boot manager (BootMgr), but I'm not sure how it relates to GRUB. [131059980050] |How am I supposed to make the partitions, knowing that a hard disk can take up to four primary partitions? Perhaps make one primary, one extended, and then logical partitions inside it? [131059980060] |If so, how do I do that with fdisk? The options aren't very clear about that. [131059980070] |Thank you! [131059990010] |Grub can boot FreeBSD, and that's the way I'd do it because I'm more familiar with Grub. [131059990020] |I gave up on FreeBSD because of driver problems, but I was able to dual-boot it with Ubuntu and you should be able to do so as well. [131059990030] |Here is a post found by googling. [131059990040] |Regarding partitions, you can make any setup you want because both Linux and BSD can boot from logical partitions. [131059990050] |So you can have 1 extended partition with lots of logical ones, or 3 primary partitions and 1 extended partition. [131059990060] |It's up to you. [131059990070] |Update: in a comment AlexD stated that FreeBSD can only boot from a primary partition. [131059990080] |I'm not entirely sure about this, but he is probably right. [131059990090] |In that case you should spend 3 primary partitions on the BSDs and logical ones on Linux (I'm pretty sure Linux can boot from logical partitions). [131059990100] |fdisk deserves a separate question, but have you ever really tried to use it? [131059990110] |I find fdisk pretty straightforward. [131059990120] |If you find it complicated you can try a live CD with GParted.
[131059990130] |The openSUSE live CD should have a GUI partitioning tool as well, but I'm not sure (I'm more familiar with Ubuntu). [131060000010] |I need help with grep and awk [131060000020] |I have created the alias below in my .bash_aliases file [131060000030] |alias auth="grep \"$(date|awk '{print $2,$3}')\" /var/log/auth.log|grep -E '(BREAK-IN|Invalid user|Failed|refused|su|Illegal)'" [131060000040] |This is supposed to: [131060000050] |
  • check today's date
  • [131060000060] |
  • grep auth.log for today's messages
  • [131060000070] |
  • grep today's messages for warning messages matching particular strings
  • [131060000080] |However, it only works when there's a 2-digit day, because days numbered <10 do not have a preceding zero. [131060000090] |For example: [131060000100] |I run date and pipe the result to awk. [131060000110] |date outputs Sat Jan 1 04:56:10 GMT 2011, and then awk captures $2 and $3 and feeds them into grep as follows: [131060000120] |Jan 1 [131060000130] |However, when there's a single-digit day, messages in auth.log appear as follows: [131060000140] |So there are two spaces following Jan in auth.log but only one space following Jan in my grep command. [131060000150] |How can I modify the command to allow for the additional space? [131060010010] |Rather than using date | awk ..., you can use a format specifier with the date command for the format you want. [131060010020] |According to the date(1) man page, %b is the abbreviated month name, and %e is the day of month, space padded, same as %_d. [131060010030] |The following date command should give you a string in the form you want: [131060010040] |You can also put other characters into the format specifier, so if you use: [131060010050] |you'll get a grep pattern that matches the date only at the beginning of the line. [131060010060] |This would prevent any false matches where there is a date in the message part of the log. [131060010070] |As pointed out by Steven D, you can also do this with a single invocation of grep: [131060010080] |I've made a few changes based on issues mentioned in comments related to quoting. [131060010090] |My rules for quoting are to use single quotes when grouping separate words into a single word and to protect against shell expansion of metacharacters, and to use double quotes only when you want expansion inside a multi-word string. [131060010100] |The original answer had the date format string in double quotes, which was wrong according to my above rules. [131060010110] |I've now changed that. [131060010120] |An edit put the grep string into double quotes.
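Putting those pieces together as a shell function (rather than an alias, for the quoting reasons discussed here) might look like the sketch below; the log path is parameterized only so the function is easy to try on a sample file:

```shell
# auth: show today's suspicious lines from the auth log.
# date '+%b %e' gives e.g. "Jan  1" with the two-space padding that
# syslog uses for single-digit days, and ^ anchors it to line start.
auth() {
  logfile="${1:-/var/log/auth.log}"
  grep "^$(date '+%b %e')" "$logfile" |
    grep -E '(BREAK-IN|Invalid user|Failed|refused|su|Illegal)'
}
```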
[131060010130] |I've put it back into single quotes because there is so often an overlap between shell metacharacters and grep regular expression (RE) metacharacters that you almost always want to single-quote REs to grep. [131060010140] |The current string may not need single quotes, but if this shell function evolves over time, it may break with future changes. [131060010150] |Because the question was asking about a command to put inside an alias, there was an additional level of quoting that was not shown in this answer. [131060010160] |It would be simpler to use a shell function instead of an alias so you don't need to deal with this extra level of quoting. [131060010170] |Nested quoting can get messy quickly, so anything you can do to avoid it, you should do. [131060010180] |I have tested this as a shell function, using Gilles's suggestion for futzing with the date, and it "works for me". [131060020010] |How to open the same directory in another panel in Midnight Commander? [131060020020] |In Midnight Commander, how do I quickly set the right panel to the same directory as the left panel (and vice versa)? [131060030010] |Newer versions of Midnight Commander use Alt-o (also ESC followed by o) to do this. [131060030020] |Older versions used Alt-o to change directory to the currently highlighted directory, so it will depend on which build you are using. [131060040010] |Why am I failing to mirror a web site (using wget)? [131060040020] |I have tried using wget --mirror http://tshepang.net/, but it only retrieves one page, "tshepang.net/index.html". [131060040030] |Is this a bug in wget? [131060040040] |Here's the output, from using the --debug option: [131060050010] |Assuming wget is in your path (if not, you'll have to enter the full path), issue the following commands: [131060060010] |The --no-cookies option helped (thanks to wag): [131060060020] |It seems like all the redirection caused wget to interrupt the request. [131060060030] |Try with --no-cookies.
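Spelled out, the invocation this thread converges on would be something along these lines (a sketch; per the wget manual, --mirror is shorthand for -r -N -l inf --no-remove-listing):

```shell
# mirror_site URL: mirror URL into the current directory.
# --no-cookies avoids the cookie/redirect interaction described above;
# --mirror turns on recursive retrieval with timestamping.
mirror_site() {
  wget --no-cookies --mirror "$1"
}

# Usage: mirror_site http://tshepang.net/
```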
[131060060040] |This was determined from reading the attached log. [131060070010] |You also need to set -r for recursion and -l X for link depth, where X is an integer. [131060070020] |It's also a good idea to use -A to set the list of acceptable file types to keep (otherwise you only get HTML files). [131060080010] |When changing distro [131060080020] |Hi, suppose I have previously installed Ubuntu with root, home, and swap partitions. [131060080030] |Now I want to change distro to Arch Linux. [131060080040] |Is it the case that I only need to wipe my root partition and install Arch Linux there instead? [131060090010] |That should do it as far as I know, but it may also affect some dependencies whose effects may not be immediately visible. [131060100010] |You can safely leave the swap partition as is; it can be shared among different distros. [131060100020] |The root partition definitely has to be wiped, as you expect. [131060100030] |The home partition is somewhere in the middle. [131060100040] |Of course your data and settings will not harm the new installation, but a difference in configuration options may give you weird errors. [131060100050] |A better approach is to back up the home partition somewhere, then install the new distro (wiping the home partition on the way). [131060100060] |When you are done installing the new distro, simply recover from the backup. [131060100070] |Or, if you don't like backups, just cross your fingers and install it that way, keeping the home partition. [131060100080] |In the case of errors, try creating a new user to check. [131060100090] |If the new user does not have the problem then you know you have to clean up your configurations :) I don't like this approach because it's less clean. [131060100100] |Arch and Ubuntu are so different that I'm quite sure there will be lots of unused dot files in your home directory.
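The back-up-then-restore route can be as simple as one copy out and one copy back. A sketch using cp -a (rsync -aHAX would do the same job and can resume; the /mnt/backup destination is an assumption):

```shell
# backup_tree SRC DST: copy SRC's contents into DST, preserving
# permissions and timestamps, including dot files.
backup_tree() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  cp -a "$src/." "$dst/"
}

# Before reinstalling (destination path is an assumption):
#   backup_tree /home /mnt/backup/home
# After installing Arch and re-creating your user:
#   backup_tree /mnt/backup/home /home
```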
[131060110010] |HPUX setacl leaves uid behind [131060110020] |I have a shell script that I execute after uninstalling a web application. [131060110030] |The script is meant to clean up permissions that were needed during the execution of the application. [131060110040] |find /opt/path -exec setacl -d user:myUser {} ';' [131060110050] |After this executes and the ACL is removed, I am left with an ACL entry that looks as follows: [131060110060] |user:101:--- /opt/path [131060110070] |How can I properly call setacl to remove the user without leaving behind a UID? [131060120010] |Is user 101 the owner of the file? [131060120020] |If so, you need to change the file to a different user ID, with chown (in addition to, or in lieu of, the setacl call). [131060120030] |Every file belongs to one user and one group; ACLs come in addition to that. [131060120040] |Note that I've never used ACLs on HP/UX, so I may be missing something. [131060120050] |It might help if you showed the output of ls -ld /opt/path and getacl /opt/path before you run that find command. [131060130010] |If you've quoted your command accurately as: [131060130020] |you are missing a crucial space: [131060130030] |The former invokes undefined (or maybe implementation-defined) behaviour from find; it might or might not expand the file name when the {} is not in an argument on its own. [131060130040] |But it then invokes the setacl command with no filename; it combines the filename with the control argument user:myUser. [131060130050] |It is most unlikely to be correct as written - but I'm hoping that it is just a typo in your transcription from your system to SO. [131060140010] |wgetpaste alternatives? [131060140020] |Are there any wgetpaste alternatives? [131060150010] |I use an online service called sprunge.us. [131060150020] |It lets you post pretty simply, like this: [131060150030] |I have curl -F "sprunge=<-" http://sprunge.us | xclip aliased to webshare on my system, so it becomes simply command | webshare.
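As a concrete sketch, that alias can equally be written as a small function, using the same curl and xclip pipeline just described:

```shell
# webshare: paste stdin to sprunge.us and put the returned URL
# into the X selection with xclip, as described above.
webshare() {
  curl -F 'sprunge=<-' http://sprunge.us | xclip
}

# Usage: dmesg | webshare
```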
[131060150040] |The added xclip at the end gets the URL into the X clipboard; it's not on every system, and there are several other tools out there like it. [131060160010] |I use ix.io with an account set up in .netrc and its command-line tool installed; it's simple and cool. [131060160020] |Then you can either pipe stuff through it like the above answer: [131060160030] |or directly paste a file: [131060160040] |This returns the URL. [131060160050] |Then I additionally set up a git alias for this so that I can easily paste my format-patches and get a URL for each: [131060160060] |To paste a patch I do, for example: [131060160070] |or to paste whatever is in your current buffer in vim: [131060160080] |For uploading files that aren't too big: http://paste.xinu.at/ with its client. [131060170010] |Difference between /bin and /usr/bin [131060170020] |I read this on this website and it doesn't make sense. [131060170030] |http://rcsg-gsir.imsb-dsgi.nrc-cnrc.gc.ca/documents/basic/node32.html [131060170040] |When UNIX was first written, /bin and /usr/bin physically resided on two different disks: /bin being on a smaller, faster (more expensive) disk, and /usr/bin on a bigger, slower disk. [131060170050] |Now, /bin is a symbolic link to /usr/bin: they are essentially the same directory. [131060170060] |But when you ls the /bin folder it has far less content than the /usr/bin folder (at least on my running system). [131060170070] |So can someone please explain the difference? [131060180010] |What? No, /bin is not a symlink to /usr/bin, at least not on any FHS-compliant system. [131060180020] |/bin [131060180030] |contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode). [131060180040] |It may also contain commands which are used indirectly by scripts [131060180050] |/usr/bin/ [131060180060] |This is the primary directory of executable commands on the system.
[131060180070] |Essentially, /bin contains executables which are required by the system for emergency repairs, booting, and single-user mode; /usr/bin contains any binaries that aren't required then. [131060180080] |I will note that they can be on separate disks/partitions: /bin must be on the same disk as /, while /usr/bin can be on another disk. [131060180090] |For full correctness, some unices may ignore the FHS, as I believe it is only a Linux standard; I'm not aware that it has yet been included in SUS, POSIX, or any other UNIX standard, though IMHO it should be. [131060180100] |It is a part of the LSB standard though. [131060190010] |There are many UNIX-based systems. [131060190020] |Linux, AIX, Solaris, BSD, etc. [131060190030] |The original quote gives historical context that applies to all flavors. [131060190040] |If you look on any one specific system, you will see different results. [131060190050] |The last sentence of the original quote is specific to only some versions and distributions. [131060200010] |On Linux, /bin and /usr/bin are still separate because it is common to have /usr on a separate partition. [131060200020] |In /bin are all the commands that you will need if you only have / mounted. [131060200030] |On Solaris (and probably others) /bin is a symlink to /usr/bin. [131060200040] |Of particular note, the statement that /bin is for "system administrator" commands and /usr/bin is for user commands is not true (unless you think that bash and ls are for admins only, in which case you have a lot to learn). [131060200050] |Administrator commands are in /sbin and /usr/sbin. [131060210010] |/sbin - Binaries needed for booting, low-level system repair, or maintenance (run level 1 or S) [131060210020] |/bin - Binaries needed for normal/standard system functioning at any run level.
[131060210030] |/usr/bin - Application/distribution binaries meant to be accessed by locally logged-in users [131060210040] |/usr/sbin - Application/distribution binaries that support or configure stuff in /sbin. [131060210050] |/usr/share/bin - Application/distribution binaries or scripts meant to be accessed via the web, i.e. Apache web applications [131060210060] |*local* - Binaries not part of a distribution; locally compiled or manually installed. [131060210070] |There's usually never a /local/bin but always a /usr/local/bin and /usr/local/share/bin. [131060220010] |Is there a graphical front end to the Solaris Service Management Facility? [131060220020] |I have a few service errors at startup on an OpenSolaris installation that I keep putting off fixing since I would need to figure out the command lines again. [131060220030] |Is there an easier-to-use front end? [131060230010] |The OpenSolaris package repository includes an administration GUI called Visual Panels; you can install it by running pkg install OSOLvpanels, and then it will appear under the System->Administration menu in GNOME as "Services", or you can start it with the command vp svcs. [131060240010] |Is there a scripting language with C-like syntax? [131060240020] |I can code in neither Bash nor Python (actually, I am only comfortable with C-like syntax), and, frankly, am too busy and lazy to learn them now. [131060240030] |But I would like to script some tasks. [131060240040] |Is there a scripting language for GNU/Linux with an obvious and comfortable syntax for C/C++/C#/Java developers? [131060250010] |Install and try tcsh. [131060250020] |You can also make it your default shell, if you want. [131060250030] |Although I don't recommend it. ;-) [131060260010] |If you're comfortable with Java, then try Groovy, a scripting language based on the Java platform. [131060260020] |There is almost zero learning curve. [131060270010] |Pike is a scripting language with a C-like syntax.
[131060270020] |You've never heard of it? [131060270030] |Consider this a point against: it's rarely installed by default, it doesn't have many libraries, there's not much literature about it, there aren't many people who can help you with it… [131060270040] |Just pick Perl or Python, the two major scripting languages on unix systems (plus the shell, but it's a trickier language and has a less general scope). [131060270050] |The syntax is only 1% of learning a language anyway. [131060270060] |I'd recommend Python as the simpler of the two. [131060280010] |If you really want to program in C but skip the long steps of compiling & linking, check out TCC, the Tiny C Compiler. [131060280020] |It even supports running via shebang. [131060290010] |Ch is a C and C++ interpreter; it can be used for scripting. [131060300010] |CINT is another C & C++ interpreter. [131060300020] |I don't know if you'd want to have to wrap commands in [131060300030] |though. [131060300040] |Maybe a macro like the following (untested) [131060300050] |might be useful in that approach, e.g., S("cp a b") [131060300060] |Maybe not :) [131060310010] |There are some packages available for node that facilitate system scripting. [131060310020] |The node package manager is probably the easiest way to install such packages; node itself can be built from source (with the V8 engine it runs on) or installed via some system package managers. [131060310030] |You may need to learn to use evented I/O in order to get much done. [131060320010] |You should search for "learn Python in 10 minutes". [131060320020] |It covers the most useful Python features: lists, tuples, dictionaries, classes, and of course its awesome indentation system. [131060320030] |Learn it; I personally consider Python important right after C/C++, because it does so much by default, and as a scripting language it serves a lot of purposes. [131060320040] |Advantages: [131060320050] |
  • Features everything you'll need as a programmer
  • [131060320060] |
  • A VERY clean and easy syntax; its author says it can make you 3 to 4 times more productive than C/C++
  • [131060320070] |Disadvantages: [131060320080] |
  • Speed, but if you're not programming low-level code where performance matters, it's sufficient.
  • [131060330010] |php-cli can be quite useful. [131060330020] |PHP has a bad reputation, but since PHP version 5 the language is actually quite OK. [131060330030] |And the syntax is similar to C/C++/Java. [131060340010] |pacman and powerpill not working [131060340020] |Both pacman and powerpill don't seem to be working on my Arch installation. [131060340030] |I'm sure my Internet connection is working. [131060340040] |The download just starts in powerpill but the speed remains zero. [131060340050] |In pacman, there's an error saying 'No file address found'. [131060350010] |Update your mirrorlist file (/etc/pacman.d/mirrorlist). [131060350020] |And run "pacman -Sy" before trying to install a package. [131060350030] |You can use kernel.org's mirrors: [131060360010] |For what purpose would perl* be excluded in yum.conf? [131060360020] |I am not a Unix/Linux admin. [131060360030] |I was attempting to install git on our server, and after much googling and trial and error, I discovered that perl* is excluded in our yum.conf file, preventing perl-Git from installing, which is a dependency of git-core. [131060360040] |I understand what this does, but I'm not sure why it has been done. [131060360050] |There are a lot of things excluded in this file, and I just want to understand the reasons one might want to exclude something, whether it's a security issue, or what. [131060360060] |Our original server administrator was killed in a boating accident. [131060360070] |We are using CentOS 5.4, and I am using yum for my attempts at installing git. [131060370010] |It might make sense to temporarily exclude a package from installation if the available version is known to be buggy, though this would rarely occur on a server where one generally installs distributions that don't update often except for bug fixes.
[131060370020] |A reason that comes to mind for excluding perl specifically is if there is a separate installation of perl, possibly directly from CPAN, possibly shared or synchronized with other machines on the network to ensure consistent sets of installed libraries and versions. [131060370030] |Look in /usr/local or /opt for an alternate perl installation, and check for a PERL5LIB setting in /etc/profile. [131060370040] |I wouldn't do it that way, because as you noticed it will break dependencies, but I can see why someone might be tempted. [131060370050] |Maybe if you post the full set of exclusions someone will spot a pattern. [131060370060] |Is there any comment in the file that might give a hint? [131060370070] |To avoid this kind of issue in the future, you should put all configurations under version control. [131060370080] |Then the changelog would indicate when the surprising configuration was set up, and hopefully why. [131060370090] |On Debian/Ubuntu I use etckeeper, which I think has been packaged for CentOS too. [131060370100] |On a multi-administrator machine, it should be set up never to commit changes automatically, forcing the administrator to make an explicit commit before they can run yum install or yum update. [131060380010] |cPanel keeps its own copy of Perl. [131060380020] |The default install adds that exclude rule. [131060380030] |I think they do it because many people rely heavily on cPanel working and doing all the server work, and there may have been issues in the past regarding the packages and Perl. [131060380040] |You can install git by using the --disableexcludes option to disable the excludes on the repository: [131060390010] |It is unlikely that installing Perl into its normal root will interfere with cPanel, depending on the configuration. [131060390020] |What does which perl return? [131060390030] |Technically you can install Git, or even Git plus its dependencies, without having Perl installed.
[131060390040] |Please note that doing so may affect certain functionality within Git. [131060390050] |yum -y install yum-downloadonly && yum install --downloadonly --downloaddir=/foo/bar/ git [131060390060] |This will download current RPMs for Git and its dependencies (perl-Error and perl-Git) to /foo/bar/. [131060390070] |Now you can rpm -ivh --nodeps /foo/bar/{git,perl-{Error,Git}}*.rpm [131060400010] |Can I set my local machine's terminal colors to use those of the machine I ssh into? [131060400020] |I have a color scheme that I like for when I'm in a terminal, but I ssh into the machine I work on from multiple sources (locally, PuTTY, my netbook, etc.) and I want to maintain the same color scheme throughout. [131060400030] |Is this possible? [131060400040] |I especially want it in PuTTY; it's difficult to change PuTTY colors. [131060410010] |Colors in terminals are determined in two steps: [131060410020] |
  • the program running in the terminal tells the terminal to use a certain color number;
  • [131060410030] |
  • the terminal translates each color number into a color value.
  • [131060410040] |Xterm has an escape sequence to change the color value associated with a color number. [131060410050] |I don't remember whether PuTTY supports this sequence; I know Mintty does. [131060410060] |These settings won't survive a terminal reset. [131060410070] |You can overcome this difficulty by appending the color-changing sequence to your terminal's reset string. [131060410080] |
  • On a terminfo-based system using ncurses, save your terminal's terminfo settings to a file with infocmp >>~/etc/terminfo.txt. [131060410090] |Edit the description to change the rs1 (basic reset) sequence, e.g. replace rs1=\Ec by rs1=\Ec\E]4;4;#6495ed\E\\. [131060410100] |With some programs and settings, you may need to change the rs2 (full reset) as well. [131060410110] |Then compile the terminfo description with tic ~/etc/terminfo.txt (this writes under the directory $TERMINFO, or ~/.terminfo if unset).
  • [131060410120] |
  • On a termcap-based system, grab the termcap settings from your termcap database (typically /etc/termcap). Change the is (basic reset) and rs (full reset) sequences to append your settings, e.g. :is=\Ec\Ec\E]4;4;#6495ed\E\\:. [131060410130] |Set the TERMCAP environment variable to the edited value (beginning and ending with :).
  • [131060410140] |Now you can put something like this in your ~/.profile: [131060420010] |You're ssh-ing into just one box, right? Why not just set the PS1 variable on that box to use the color scheme you want? [131060420020] |If you keep it to 16 colors you shouldn't have a problem on any modern TERM; most should support 256 colors, but most don't set TERM=xterm-256color out of the box, and some fools (cough my employer cough) sanitize TERM to be alphanumeric only. [131060420030] |Unfortunately, what to put in your PS vars is highly dependent on the shell you are using. [131060430010] |Filesystem type is ext2fs, partition type 0x83 [131060430020] |I installed Ubuntu 10.04 (LTS) on a VPS server I'm renting and get this critical error when booting: [131060430030] |This prevents the rest of the boot process from continuing. [131060430040] |When "Googling", I found no answers besides "re-install". [131060430050] |I already tried that 3 times, and am still experiencing the same problem. [131060440010] |This is not an error; this is standard output from GRUB. [131060440020] |(Although in your case it could be a coincidence...) [131060440030] |Are you sure your grub.cfg / menu.lst is configured correctly? [131060440040] |My GRUB normally outputs this line after the root (hd0,X) command... [131060440050] |I can't tell much more without some extra details of what software you're running, full output, and at what part of the boot process this occurs :) [131060450010] |DPMS does not work: the monitor is not switched off [131060450020] |I have a monitor which was properly switched off by my Debian PC when unused. [131060450030] |I attached it to another machine and, this time, it is never switched off. [131060450040] |In /etc/X11/xorg.conf, I have: [131060450050] |It is recognized when X11 starts: [131060450060] |The operating system is Debian 5 (Lenny).
[131060450070] |The graphics card is: [131060450080] |X11 is: [131060460010] |What sudoer spec allows users to mount cifs shares? [131060460020] |I'm trying to create a line in /etc/sudoers that allows members of group "users" to mount cifs shares anywhere inside their own home directory. [131060460030] |In my first attempt I tried: [131060460040] |...which admittedly doesn't restrict them to their own home directory. [131060460050] |As a user, when I try the command: [131060460060] |...I get prompted for the password, then receive an error: [131060460070] |Is there any way to coerce sudoers to specify what I want? [131060470010] |I'm not sure why your requirement must allow users to mount the devices anywhere in their home directory. [131060470020] |Security to keep the device private to them, I suppose? [131060470030] |Anyway, if you can handle having public, static mount points you could add entries to /etc/fstab for the cifs shares and add the "users" attribute to let users mount/unmount them. [131060470040] |The line would look something like this: [131060470050] |See http://www.tuxfiles.org/linuxhelp/fstab.html [131060470060] |I realize this doesn't solve the exact issue you presented, but maybe it gives you some ideas for a compromise. [131060480010] |I figured out how to do it, less the restriction that you are in your own directory: [131060480020] |Does anyone have an idea how to restrict a user to his own home directory? [131060490010] |You might be better off giving your users the ability to use FUSE filesystems to mount their cifs shares. [131060500010] |I can't help but think of pam_mount solving half of the puzzle already, by giving users the ability to mount such networked volumes at the start of a session. [131060510010] |It's worth noting that some recent versions of mount.cifs fail unless the mount point is in /etc/fstab, even if they are installed setuid, so I would expect your sudo approach to fail with those versions.
[131060510020] |http://fedoraforum.org/forum/showthread.php?p=1329591 [131060510030] |https://bugs.launchpad.net/ubuntu/+bug/657900 [131060510040] |As an alternative, you might try one of these: [131060510050] |SMBNetFS [131060510060] |FuseSMB [131060520010] |Is there any env variable to turn --color (and the like) on for all commands? [131060520020] |I do use aliases to turn on color for some commands by default. [131060520030] |But I'm wondering if there's an easier way of telling my system that color is supported, so I don't have to use --color for grep, ls, etc. [131060530010] |FreeBSD has CLICOLOR. [131060530020] |On Linux and any other system with GNU tools, you need to set LS_COLORS, GREP_COLOR, and GREP_OPTIONS='--color=auto', but even then you still need to run ls --color=auto. [131060530030] |Run info coreutils 'ls invocation' for more details. [131060530040] |The easiest way I know to avoid typing --color on Linux is to make ls run ls --color=auto using an alias. [131060530050] |This is what I put in my .bashrc (well, really my .env, but it's like .bashrc) to make it happen by default: [131060540010] |Xterm is not completely erasing field lines [131060540020] |We are running Windows clients with a Cygwin X server, connecting to a bash script application running on AIX Unix. [131060540030] |We have a login script that uses expect to ssh into the server and then xterm to create the client terminal. [131060540040] |This works fine, except that on any form screen, after the fields are updated and the __ line is erased, a single . is left at the end. [131060540050] |I tried different fonts and sizes, but no matter what I do, that single . is left after the line is erased. [131060540060] |Any ideas? [131060550010] |In expect you can clear the screen using the raw vt100 commands: [131060550020] |That was the solution to my question on Stack Overflow. [131060550030] |Perhaps it can help you.
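The raw vt100 commands mentioned above are just escape sequences. As a sketch (assuming a vt100/ANSI-compatible terminal), clearing the screen — whether from a shell or via expect's send command — comes down to emitting two sequences:

```shell
# \033[2J erases the entire display; \033[H moves the cursor to the top-left corner.
# Together they give a "clear screen" on any vt100/ANSI-compatible terminal.
clear_screen() {
  printf '\033[2J\033[H'
}

clear_screen
```

In an expect script the equivalent would be sending the same bytes with send, e.g. a string containing the ESC character followed by [2J and [H.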
[131060550040] |An example of setting an interact "hook" into the expect script on your spawned ssh session might look something like this: [131060550050] |Then only if you hit that Ctrl+A keystroke do you send the clear command. [131060550060] |You could also interact and then take action on seeing a certain field or character on screen. [131060560010] |Small apt-based Linux [131060560020] |I want to set up some VMs running as small a Linux as possible. [131060560030] |The criteria: [131060560040] |
  • Package system based on Apt.
  • [131060560050] |Runs some GUI (can be very small).
  • [131060560060] |Runs in as little RAM as possible: in this context 64 MB is good and 256 MB is bordering on too much.
  • [131060560070] |Installs on a hard disk, not RAM-resident.
  • [131060560080] |As little hard disk space as possible. [131060560090] |Ideal would be 1 GB.
  • [131060560100] |Fast boot and shutdown times.
[131060560110] |Suggestions? [131060570010] |Debian [131060570020] |According to them, 64 MB of RAM is enough to run it with a GUI, and they are the original Apt distribution. [131060570030] |You should bear in mind that 256 MB is recommended even without a GUI, though. [131060570040] |They do list a 5 GB hard disk for a "desktop", but you should be able to install many window managers/web browsers/etc. within the 1 GB limit if you start from the minimal install. [131060570050] |2 GB per virtual disk would probably be best though, or you risk running out of swap space. [131060570060] |If you are familiar enough with apt(itude), it shouldn't be hard to add just the software you need. [131060570070] |Any minimalist distro will be biased toward the authors' goals and stop getting updates when you need them. [131060580010] |Damn Small Linux will make Debian look huge. [131060580020] |If the whole distribution fits in 50 MB of disk, you can believe the memory footprint is small too. [131060580030] |It's based on Knoppix, which is based on Debian, so AFAIK it uses apt. [131060580040] |Damn Small Linux is a very versatile 50 MB mini desktop-oriented Linux distribution. [131060580050] |DSL was originally developed as an experiment to see how many usable desktop applications can fit inside a 50 MB live CD. [131060580060] |It was at first just a personal tool/toy. [131060580070] |But over time Damn Small Linux grew into a community project with hundreds of development hours put into refinements, including a fully automated remote and local application installation system and a very versatile backup and restore system which may be used with any writable media, including a hard drive, a floppy drive, or a USB device. [131060580080] |Important note: apparently Damn Small Linux is no longer maintained. [131060590010] |I am not aware of any apt-based Linux that is actually small. [131060590020] |However, if you remove that one requirement (apt-based), I can recommend Slitaz.
[131060590030] |The ISO file is 30 MB, and it comes with a GUI and a functional web browser. [131060590040] |It is actually still maintained, as opposed to DSL, which seems to have been abandoned for a while. [131060590050] |Slitaz uses tazpkg for package management, which in my opinion is as easy as apt: [131060600010] |Debian can be quite small. [131060600020] |During the install, when you get to tasksel, unselect everything. [131060600030] |You'll get a very minimal system taking up only 512 MB. Even then, you can still remove packages that you won't use. [131060610010] |Crunchbang Linux [131060610020] |It used to be based on Ubuntu, but now it's based on Debian. [131060610030] |It comes with Openbox as the default window manager. [131060610040] |The default Debian install uses GNOME, which is quite a bit heavier than Openbox.
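Not from the original thread, but as a small verification sketch (assuming a Linux guest with /proc mounted), you can check how much RAM a VM actually sees against the 64–256 MB targets discussed above:

```shell
# Report total RAM in MB by reading /proc/meminfo (Linux-specific).
# MemTotal is given in kB, so divide by 1024.
mem_total_mb() {
  awk '/^MemTotal:/ {printf "%d\n", $2 / 1024}' /proc/meminfo
}

mem_total_mb
```

The same idea works for checking the disk-space budget, e.g. with df -m on the root filesystem after a minimal install.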