[131062430010] |is there any way to keep a command from being added to your history? [131062430020] |Is there any way to keep a command from being added to your history? [131062430030] |I have a command that I want to keep out of my history file, and I really don't care to have it there when I search the history stored in memory, though that's less of a concern. [131062430040] |Is there any way to prevent this? Or do I just have to go back and edit my history file? [131062430050] |Update: I didn't realize this might be shell-specific. My shell is zsh; how would I do it in zsh? (Though you're welcome to answer for other shells, including bash, so people know how to do this in their shell, but those answers won't help me specifically.) [131062440010] |If you're running the command over and over and your shell is bash, the HISTIGNORE variable will do this. [131062440020] |Let's say you have secret.server.com that you ssh to, FTP files to, etc., and you don't want any line that mentions secret.server.com saved: [131062440030] |You can list multiple patterns with a colon separating them. [131062440040] |Make sure you include a * if needed, since in the example below 'fortune' would be excluded, but 'fortune -l' would be included: [131062440050] |With HISTIGNORE set, nothing matching the patterns you list will be saved to .bash_history, and even the up arrow key, which normally recalls your previous command, won't work if it matches your pattern. [131062450010] |In bash, use the HISTCONTROL variable. [131062450020] |Set it to HISTCONTROL=ignorespace (or HISTCONTROL=ignoreboth). [131062450030] |From now on, when you begin a line with a space, it will not be saved in the history. [131062450040] |This avoids including the not-to-be-disclosed command in some configuration file. [131062450050] |Even so, it is easy to forget to add the space and then want to go back and remove the entry.
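A minimal ~/.bashrc sketch combining the two bash mechanisms described above (the hostname and patterns are examples, not values you must use):

```shell
# ~/.bashrc -- keep sensitive or noisy commands out of bash history
export HISTCONTROL=ignoreboth                      # ignorespace + ignoredups
export HISTIGNORE='*secret.server.com*:fortune*'   # colon-separated glob patterns
```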
[131062450060] |To delete one entry in the history, use history -d index, where index is the number found with the history command (first column). [131062460010] |In ZSH: [131062460020] |First set HIST_IGNORE_SPACE in your profile and then prefix the commands you don't want stored with a space. [131062460030] |From the man page, the following three options can be used to say that certain lines shouldn't go into the history at all: [131062460040] |
  • HIST_IGNORE_SPACE: don't store commands prefixed with a space
  • [131062460050] |HIST_NO_STORE: don't store the history (fc -l) command itself
  • [131062460060] |HIST_NO_FUNCTIONS: don't store function definitions
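A minimal ~/.zshrc sketch turning on the three options listed above:

```
# ~/.zshrc -- keep certain lines out of zsh history
setopt HIST_IGNORE_SPACE    # ' cmd' with a leading space is not recorded
setopt HIST_NO_STORE        # history / fc -l invocations are not recorded
setopt HIST_NO_FUNCTIONS    # function definitions are not recorded
```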
[131062470010] |The easiest way for a one-off is [131062470020] |where $ represents the prompt of the initial shell and $$ represents the prompt of the subshell. [131062470030] |This works in bash and zsh; in ksh you need HISTFILE=/notwritable instead. [131062470040] |Note that what matters is the value of HISTFILE when the history file is written, not when the command is run. [131062470050] |Bash, ksh and zsh write to the history file when they exit (zsh can do it more often depending on configuration). [131062470060] |Alternatively, you can get bash or zsh to ignore lines matching certain patterns (this is already covered by other answers). [131062470070] |Another possibility in zsh is [131062470080] |fc -p pushes the current history list onto a stack and starts a new one that isn't associated with a save file. fc -P pops that unsaved history list off the stack and forgets about it altogether. [131062470090] |While a process is running, the command and its arguments (and on some systems its environment as well) will show up in the process table (ps output). [131062470100] |Root can see past commands in the system accounting database, and can do more logging without your control. [131062470110] |And there'll probably be all kinds of forensic evidence that can show roughly what command you ran (such as file modification times). [131062470120] |So you don't get privacy against determined eavesdroppers. [131062480010] |Can I spawn a new terminal that is a clone of the current terminal? [131062480020] |So let's say I'm developing code in directory /asdf/qwer/dfgh/wert/asdf/qwer and I've added about three more directories like that to my path and I have a bunch of arcane environment variables set. [131062480030] |Then I realize that I really need another terminal open and set up in just this same way (although the need is not recurring, so I wouldn't want to just alter my .bashrc).
[131062480040] |Is there any command to open a new terminal window that is an exact clone of this one? [131062490010] |If you start a screen (GNU Screen) in your defined environment, that environment will be used by the sub-process (i.e. screen) and you can use it to spawn new terminals. [131062490020] |But if you want to fork it another time (a screen inside a screen), it begins to get tricky. [131062500010] |Cloning the path is easy if you can run your terminal program from the command line. [131062500020] |Assuming you're using xterm, just run xterm & from the prompt of the terminal you want to clone. [131062500030] |The new xterm will start in the same directory, unless you have it configured to start as a login shell. [131062500040] |Any exported environment variables will also carry over, but un-exported variables will not. [131062500050] |A quick and dirty way to clone the whole environment (including un-exported variables) is as follows: [131062500060] |If you've set any custom shell options, you'll have to reapply those as well. [131062500070] |You could wrap this whole process into an easily-runnable script. [131062500080] |Have the script save the environment to a known file, then run xterm. [131062500090] |Have your .bashrc check for that file, and source it and delete it if found. [131062500100] |Alternately, if you don't want to start one terminal from another, or just want more control, you could use a pair of functions that you define in .bashrc: [131062500110] |EDIT: Changed putstate so that it copies the "exported" state of the shell variables, so as to match the other method. [131062500120] |There are other things that could be copied over as well, such as shell options (see help set) -- so there is room for improvement in this script. [131062510010] |In a similar situation, I also found it useful to start the new shell in the same directory as the current one. [131062510020] |I used a recipe like this to start the shell.
[131062510030] |The -t option is needed whenever you explicitly run a shell using ssh. [131062510040] |It causes a pseudo-tty to be created for the process. [131062510050] |This is necessary for history commands and other interactive features to work correctly. [131062510060] |Earlier lines in the script set DIR to the current directory and SHELL to the user's preferred shell. [131062520010] |What causes an ssh interruption? [131062520020] |Hi, what exactly causes an ssh connection to be interrupted? [131062520030] |When you idle for a while, it disconnects. [131062520040] |How do I keep the connection alive (without autossh or reconnect)? [131062530010] |Here are a few things you can try: [131062530020] |1) It's most likely the shell which is timing out. [131062530030] |Disable the timeout by unsetting TMOUT in your profile. TMOUT is the number of seconds that bash waits for input before terminating. [131062530040] |Echo $TMOUT to see if it is set. [131062530050] |Add the following to your profile: [131062530060] |2) Configure PuTTY to send keepalive packets by going into: [131062530070] |3) Tweak your sshd_config (normally found in /etc/ssh) and add: [131062530080] |Save the file and restart sshd. [131062540010] |This is most likely a firewall which cuts your idle connection after a while. [131062540020] |You can configure the openssh server or client to send a KeepAlive after a while. [131062540030] |Send a KeepAlive every 5 minutes to the server: [131062540040] |If you have control over the openssh-server, you can also send KeepAlives to the client after a defined interval. [131062540050] |Add the following to /etc/ssh/sshd_config: [131062540060] |TCPKeepAlive should be yes by default. [131062540070] |Restart the openssh-server after the modification: [131062550010] |I can't use or find the google debian unstable/sid repository.
[131062550020] |I'm trying to use the debian unstable/sid branch of the google repositories (deb http://dl.google.com/linux/deb/ unstable non-free main), but I get errors: [131062550030] |Err http://dl.google.com unstable/non-free i386 Packages 404 Not Found [IP: 209.85.146.93 80] Err http://dl.google.com unstable/main i386 Packages 404 Not Found [IP: 209.85.146.93 80] W: Failed to fetch http://dl.google.com/linux/deb/dists/unstable/non-free/binary-i386/Packages.gz 404 Not Found [IP: 209.85.146.93 80] W: Failed to fetch http://dl.google.com/linux/deb/dists/unstable/main/binary-i386/Packages.gz 404 Not Found [IP: 209.85.146.93 80] [131062550040] |I get the same errors for deb http://dl.google.com/linux/deb/ sid non-free main. [131062550050] |I can't browse the repository at http://dl.google.com, I just get a 404. [131062550060] |Is it OK to use the stable repository instead even though I'm using debian unstable? [131062560010] |Yes: stable here means "the stable branch of Google's repo", not "packages for Debian Stable". [131062570010] |Starting a BIND/DHCP server [131062570020] |Hi, [131062570030] |I am looking for some reading material on starting my own BIND/DHCP server. [131062570040] |I mainly want the BIND server to just be a caching server for my home computers, but it may also need to be authoritative for a domain that I may buy later. [131062570050] |Also, some material on starting a DHCP server would be great. [131062570060] |Also, I want to run this on either FreeBSD or OpenBSD. [131062570070] |Thanks in advance. [131062580010] |As I have little knowledge of FreeBSD, you can refer to the pages below for info: [131062580020] |http://www.freebsd.org/doc/handbook/network-dns.html http://www.freebsd.org/doc/handbook/network-dhcp.html [131062580030] |Personally, I prefer to run DHCP on a network device such as a DSL router or Wi-Fi router.
[131062590010] |Here's a named.conf that will just work for caching; it's basically the default, but queries the OpenDNS servers for DNS in the forwarders section. [131062590020] |I'm pretty sure none of the zones are required, but I leave them there anyway. [131062590030] |It should work wherever bind does. [131062590040] |You may want to allow recursion and query on more than just 127.0.0.1, though. [131062600010] |I wanted to do exactly what you want to do, but I did it with Linux. [131062600020] |I seriously doubt there's much difference in this case, however. [131062600030] |I read IBM developerWorks articles on DHCP and BIND, and got them up and running: http://www.ibm.com/developerworks/linux/tutorials/l-lpndns/ http://www.ibm.com/developerworks/linux/tutorials/l-lpic2207/index.html http://www.ibm.com/developerworks/linux/tutorials/l-lpndhcp/index.html [131062600040] |BIND ended up taking a long time for non-cached requests, and periodically Firefox/Chrome/Safari would decide to time out. [131062600050] |I ended up running DNSMASQ: [131062600060] |http://www.thekelleys.org.uk/dnsmasq/doc.html [131062610010] |Both FreeBSD and OpenBSD ship with BIND as the preinstalled name server. [131062610020] |There is a good introduction in the FreeBSD handbook. [131062610030] |For OpenBSD, there is a lot of information on Kernel Panic. [131062610040] |FreeBSD doesn't include a DHCP server in its default installation, but officially recommends the ISC DHCP server; see the handbook. [131062610050] |OpenBSD does include a DHCP server; there is a tutorial in the FAQ. [131062610060] |For home use, there are alternatives such as dnsmasq which are easier to configure but have fewer capabilities. [131062610070] |Dnsmasq is suitable for embedded systems (many open source home routers run it), and includes both a simple name server (mostly for caching) and a simple DHCP server. [131062610080] |It is available as a port on FreeBSD and OpenBSD.
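As a concrete illustration of the dnsmasq route just mentioned, a minimal config might look like the sketch below; the addresses and DHCP range are examples for a typical home LAN, not values taken from the answers above (the OpenDNS forwarders echo the earlier named.conf answer):

```
# /etc/dnsmasq.conf -- minimal caching DNS plus home-LAN DHCP sketch
server=208.67.222.222                        # upstream resolver (OpenDNS)
server=208.67.220.220
listen-address=127.0.0.1,192.168.1.1         # interfaces to answer on
dhcp-range=192.168.1.50,192.168.1.150,12h    # DHCP pool and lease time
```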
[131062620010] |viewing foreground processes using ps [131062620020] |Hi, I can see background processes using ps. [131062620030] |But is there a way to view a foreground process? [131062620040] |For example, [131062620050] |$ nohup process1 & [131062620060] |then [131062620070] |ps -ef | grep "process1" [131062620080] |would display the process "process1" in execution. [131062620090] |But the above command wouldn't show a foreground process executed like: [131062620100] |$ process2 [131062620110] |Thanks in advance. [131062630010] |Weird; the ps command allows you to monitor the status of all active processes, both foreground and background. [131062630020] |For example, I start the following process in a first shell: [131062630030] |And in another shell: [131062630040] |The process is listed as expected (and the + shows that it is in the foreground process group). [131062640010] |You might be confused because ps by default shows you the processes which are on the same terminal where ps is invoked, e.g. processes started from the same terminal window. [131062640020] |Try ps -u $LOGNAME, or if you know the terminal names, ps -t $THETTYNAME1,$THETTYNAME2. [131062640030] |(The terminal names normally look like "ttyN" or "pts/N".) [131062650010] |How to retrieve the monitor configuration from the command line? [131062650020] |Hello, I am making a script that needs to access the computer's monitor(s) configuration. [131062650030] |How can I do that? [131062650040] |Is there a command or a file I could read where I can access this information? [131062650050] |At the moment, I do [131062650060] |But I only have the total resolution and not the details (what I need is the resolution of each screen individually). [131062650070] |Thanks! [131062660010] |This is heavily dependent on the setup of the system.
[131062660020] |One way to get the information would be if xrandr is being used: [131062660030] |This will display something like: [131062660040] |You could then use some text processing tool to pull out the resolution for each display. [131062670010] |You could try using the tool monitor-edid, which produces output like this: [131062670020] |This is useful if you don't want to have X running when you want to probe your monitor information. [131062680010] |xrandr only works on newer X servers with the RandR extension. [131062680020] |Granted, that should be true of everything these days, but in case not… [131062680030] |xdpyinfo also prints out per-screen information, including dimensions (pixel and physical size). [131062690010] |How do I periodically run a command with a very short interval and get the return value? [131062690020] |Hi, [131062690030] |I need to call a specific command at an interval of about 5 seconds. [131062690040] |How would I set up a daemon/process running in the background or something similar to do that? [131062690050] |I looked at cron jobs, but the minimum interval seems to be 1 minute. [131062690060] |Any advice is appreciated ;) [131062690070] |Fedora is the system. [131062690080] |EDIT: the command would be a bash script, so "watch" wouldn't do it, I think. [131062710010] |Why do you think 'watch' will not work? [131062710020] |$ cat periodic.sh [131062710030] |$ chmod +x periodic.sh [131062710040] |$ watch -n 5 ./periodic.sh [131062720010] |Maybe with nohup? [131062720020] |It's designed to let a job run after the shell is closed. [131062720030] |You can also use screen. [131062730010] |Why is this network connection so slow? [131062730020] |I am having some problems with network performance on a Linux server running Ubuntu 9.10. [131062730030] |Transfer speeds on all types of traffic are around 1.5MB/s on a 1000mbit/s wired ethernet connection. [131062730040] |This server has achieved 55MB/s over samba in the recent past.
[131062730050] |I have not changed the hardware or network set-up. [131062730060] |I do run updates on a regular basis and the latest and greatest from Ubuntu's repositories is running on this machine. [131062730070] |

    Hardware set-up

    [131062730080] |Desktop Windows PC - 1000 switch - 1000 switch - Linux server [131062730090] |All switches are Netgear, and they all show a green light for their connections, which means the connection is 1000mbit/s. [131062730100] |The lights are yellow when the connection is only 100mbit/s. [131062730110] |Other diagnostic information: [131062730120] |The server thinks it's got a 1000mbit/s connection. [131062730130] |I have tested the speed of transfer by copying files using Samba. [131062730140] |I have also used netcat (nc target 10000
  • Duplex -- if one side thinks the link is full duplex and the other side thinks the link is half duplex, expect badness. [131062750030] |
  • Defective switch? [131062750040] |Bypass it/them.
  • [131062750050] |Jumbo frames. [131062750060] |9000 byte MTU decreases overhead, which should increase throughput (forfeiting a little latency). [131062750070] |It sounds like your problem is so bad that this won't help, though.
  • [131062750080] |TCP features: ECN, SACK, congestion control algorithm
  • [131062750090] |TCP send/receive window sizes (details for Linux)
  • [131062750100] |netperf is great at troubleshooting network performance. [131062750110] |But netcat's not bad in a pinch. [131062760010] |
  • try "netstat -i" and look for rx/tx errors
  • [131062760020] |try "netstat -s" and look for tcp issues - compare values before and after the file copy and look for large spikes in resets or retransmits
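A quick way to pull the error counters out of "netstat -i" output is a short awk filter. The sample line below is made up, and the column layout varies between netstat versions, so check the header line and adjust the field numbers for your system:

```shell
# Extract per-interface RX/TX error counts from netstat -i style output.
# The sample line is illustrative; real column order varies by version.
printf 'Iface MTU RX-OK RX-ERR TX-OK TX-ERR\neth0 1500 1000 5 900 2\n' |
  awk 'NR > 1 { print $1, "rx-err=" $4, "tx-err=" $6 }'
# → eth0 rx-err=5 tx-err=2
```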
[131062770010] |If at all possible, to remove most doubt that it is indeed an OS/driver/card issue, connect the computers together using a crossover cable. [131062770020] |This will remove the switch and other possible networking issues from your equation. [131062780010] |In my professional experience, I've struggled to get good solid network performance with Samba on GNU/Linux. [131062780020] |You mentioned you have achieved speeds of 55 MBps with it, which I believe, so I'm guessing something else is definitely at play. [131062780030] |However, have you tried NFS, FTP and SCP? [131062780040] |Are the bandwidth issues consistent across the different protocols? [131062780050] |If so, it's likely narrowed down to the physical connection. [131062780060] |If you get inconsistent results, then it's likely a software problem. [131062780070] |Aside from testing the other protocols, are you using encryption on the transfer? [131062780080] |For example, using rsync -z is sweet for enabling compression, but it comes at a CPU cost, which severely impacts the overall speed of the transfer. [131062780090] |If using SSH with rsync, then you have encryption on top of compression, and your CPU will be under a bit of stress, causing severe speed penalties. [131062790010] |testing services/open ports with telnet? [131062790020] |I often see that folks test ports this way: [131062790030] |AFAIK telnet was the old way of getting onto some remote box - right? Or so I thought... [131062790040] |Why exactly can you connect via telnet to the SMTP port, for example? [131062800010] |Telnet is a very simple protocol, where everything that you type in your client (with few exceptions) goes to the wire, and everything that comes from the wire is shown in your terminal. [131062800020] |The exception is the 0xFF byte, which sets up some special communication states.
[131062800030] |As long as your communication doesn't contain this byte, you can use telnet as sort of a raw communication client over any TCP port. [131062800040] |IOW: It is purely for convenience. [131062810010] |It's for convenience, but it's also a lower-than-user-level diagnostic. [131062810020] |You can isolate the problem you're having with a service that way, for example: Joe has a database server and client. [131062810030] |They are not communicating. [131062810040] |Is the problem on the network? [131062810050] |The server? [131062810060] |The client? [131062810070] |Joe goes to the client machine and opens a shell. [131062810080] |He uses telnet, just as you described: [131062810090] |and types a command as if he were the client program [131062810100] |WHO; [131062810110] |The server replies with [131062810120] |(It's a very dumb server) [131062810130] |So then Joe knows that the network link to the server works, and that his client is likely not configured correctly. [131062820010] |Why exactly can you connect via telnet to smtp port for example? [131062820020] |because both smtp and telnet protocols are implemented as plain-text. [131062820030] |So with a telnet client, you can basicly go connect any port with a specific protocol that implements plain-text &you know how to communicate using the protocol. [131062830010] |Telnet was designed as a remote terminal application utilizing a socket, plain text and a few control characters. [131062830020] |Its use in this way has been mostly replaced by ssh. [131062830030] |Any telnet client can be used to interface with any protocol that is implemented in plain text and that's how it's most commonly used today. [131062830040] |Although telnet wasn't designed for this purpose it works perfectly using it this way. [131062830050] |Netcat actually was designed for this exact use (opening a socket and spitting raw data over it for any purpose). 
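Where telnet isn't installed, bash can do the same quick open-port check with its /dev/tcp pseudo-device (a bash feature, not a real file; the host and port below are just examples):

```shell
# Succeeds if something is listening on the port, fails otherwise
if (exec 3<>/dev/tcp/127.0.0.1/25) 2>/dev/null; then
  echo "port open"
else
  echo "port closed"
fi
```

Unlike telnet, this only tests whether the TCP connection can be made; it doesn't let you type protocol commands interactively.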
[131062830060] |I generally prefer netcat, but telnet is pretty much universally available. [131062840010] |Dtach and Vim over SSH: Reattach Session [131062840020] |I connected to my server: [131062840030] |While on the server, I open a dtach session with vim: [131062840040] |Then my ssh session dies. [131062840050] |I attempt to go back to my session: [131062840060] |I do not get vim, but instead just a blinking cursor. [131062840070] |I cannot do anything from there, not even ctrl+c; I can only detach the dtach with ctrl+\. [131062840080] |How can I get my dtach session back properly? [131062840090] |Or am I missing the idea of how dtach is supposed to behave? [131062840100] |N.B. I am well aware of the tmux and screen utilities. [131062850010] |Isn't GNU Screen what you are after? [131062860010] |Perhaps you did get what you want, but you need to redraw the screen? [131062860020] |Try pressing CTRL + L. [131062870010] |I think you can prevent this by passing a WINCH signal to dtach: [131062870020] |Or at reattachment: [131062880010] |JAVA_HOME not set in script when run using sudo [131062880020] |I'm trying to run an install script that requires java to be installed and the JAVA_HOME environment variable to be set. [131062880030] |I've set JAVA_HOME in /etc/profile and also in a file I've called java.sh in /etc/profile.d. [131062880040] |I can echo $JAVA_HOME and get the correct response, and I can even sudo echo $JAVA_HOME and get the correct response. [131062880050] |In the install.sh I'm trying to run I inserted an echo $JAVA_HOME. [131062880060] |When I run this script as myself I see the java directory, but when I run the script with sudo it is blank. [131062880070] |Any ideas why this is happening? [131062880080] |I'm running CentOS. [131062890010] |For security reasons, sudo may clear environment variables, which is why it is probably not picking up $JAVA_HOME. [131062890020] |Look in your /etc/sudoers file for env_reset.
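If env_reset is in effect, the env_keep line the answer goes on to describe would look something like this sketch (edit with visudo; check man sudoers on your system before relying on it):

```
# /etc/sudoers -- keep JAVA_HOME across sudo's environment reset
Defaults env_keep += "JAVA_HOME"
```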
[131062890030] |From man sudoers: [131062890040] |So, if you want it to keep JAVA_HOME, add it to env_keep: [131062890050] |Alternatively, set JAVA_HOME in root's ~/.bash_profile. [131062900010] |Run sudo with the -E (preserve environment) option (see the man file), or put JAVA_HOME in the install.sh script. [131062910010] |Installing openSUSE through a USB [131062910020] |Hi, [131062910030] |Can anyone explain briefly how to install openSUSE through a USB drive? [131062910040] |I searched in a lot of forums, but I couldn't find anything helpful. [131062910050] |Moreover, there was something written about installing it with the dd_rescue command, but that doesn't seem to work. [131062910060] |So please give me a brief idea of how to install openSUSE through a USB drive. [131062920010] |If your BIOS supports 'boot from usb' as an option in the boot loader (cd, hdd, net, floppy etc.), you can create a bootable image on a USB drive. [131062920020] |I've done so with Debian, but here is a howto for openSUSE I pulled from a search result that should apply to you. [131062920030] |http://en.opensuse.org/SDB:Live_USB_stick [131062920040] |Once you have your image on the USB drive (note that it wipes your USB drive), just configure your BIOS and reboot with the USB drive inserted. [131062920050] |It should boot from the USB drive and continue with the install. [131062930010] |Do you have an image file for the SUSE install you want? [131062930020] |Usually the process is: [131062930030] |
  • Get the install disk image from http://en.opensuse.org/
  • [131062930040] |Extract that image onto a USB drive (IZArc/7-Zip will do (in Windows), or double-click on the file in Linux/Unix. [131062930050] |Or mount it as a loopback device and copy-paste.)
  • [131062930060] |Use a tool such as UNetbootin or mkboot.bat (Windows only) to make the drive bootable (you can google them)
  • [131062930070] |Put the USB stick in the computer you want SUSE on, and make sure that computer can boot from USB drives, which could involve going into the BIOS.
  • [131062930080] |Follow the instructions, assuming all went well.
[131062930090] |That is the best I can do without more information, such as the operating system(s) you have now, what kind of computer you want SUSE on, and whether that computer has any OS already. [131062940010] |How to use three monitors on a laptop? [131062940020] |I have a laptop running Ubuntu 9.04 with a dual-head setup: laptop panel + external monitor. [131062940030] |I'd like to add another monitor but the laptop has a single VGA port, so my question is: what are my options for setting up a triple-head system on a Linux laptop? [131062940040] |(note that I'm open to "distributed display" systems such as xdmx, if it's not too messy to set up :) ) [131062950010] |You can use a USB monitor. [131062950020] |IIRC, Linux has support for those. [131062950030] |You can also buy a USB-to-VGA adapter. [131062950040] |In any case, there may be some problems with the graphics card, etc. [131062960010] |ant script stops, waiting for input when run in background [131062960020] |I'm running an ant (Java build tool) script on CentOS 5.5 that execs another java process. [131062960030] |When I run the ant script in the background: [131062960040] |The forked process' state changes to stopped and it waits for input. [131062960050] |As soon as I bring the process to the foreground it starts again (no input required on my part). [131062960060] |This does not occur on other machines running the same OS, CentOS 5.5. [131062970010] |I found the answer. [131062970020] |A little googling brought up this page: [131062970030] |http://ant.apache.org/manual/running.html#background [131062970040] |Looks like ant immediately tries to read from standard input, which causes the background process to suspend. [131062980010] |xorg performance in openoffice [131062980020] |I've just been monitoring my cpu usage in openoffice calc when cells have been copied vs when they haven't, and seen a dramatic increase in cpu usage for the Xorg process.
[131062980030] |The additional rendering required is a box with scrolling dashed lines around the cells that have been copied. [131062980040] |The issue stands regardless of whether the window is minimised. [131062980050] |Obviously it takes /some/ cpu power to render, but to increase an i7 by a more or less constant 7% usage seems slightly overkill. [131062980060] |If anything, surely this should impact the GPU? [131062990010] |I've found that turning off "Anti-Aliasing" speeds up OpenOffice on my system. [131062990020] |The setting is in Tools->Options, OpenOffice.org->View. [131062990030] |You might also want to experiment with turning on and off Hardware Acceleration. [131063000010] |How do you kick a user off your system? [131063000020] |I was googling this a bit ago and noticed a couple of ways, but I'm guessing that google doesn't know all. [131063000030] |So how do you kick users off your Linux box? Also, how do you go about seeing that they are logged in in the first place? And related... does your method work if the user is logged into an X11 DE (not a requirement, I'm just curious)? [131063010010] |There's probably an easier way, but I do this:
  • See who's logged into your machine -- use who or w: [131063010030] |
  • Look up the process ID of the shell their TTY is connected to: [131063010040] |
  • Laugh at their impending disconnection (this step is optional, but encouraged) [131063010050] |
  • Kill the corresponding process: [131063010060] |I just discovered you can combine steps 1 and 2 by giving who the -u flag; the PID is the number off to the right: [131063020010] |As Michael already pointed out, you can use who to find out who's logged in. [131063020020] |However, if they have multiple processes, there's a more convenient way than killing each process individually: you can use killall -u username to kill all processes by that user. [131063030010] |Another useful command is pkill, as in pkill -u username && pkill -9 -u username. killall has the disadvantage that on Solaris, IIRC, it means something completely different; pkill also has slightly more advanced options. [131063040010] |Log out the user 'username': [131063040020] |See man skill. [131063050010] |First of all, this indicates a larger problem. [131063050020] |If you have users that you don't trust on your system, you should probably level it and re-image. [131063050030] |With that in mind, you can do some or all of the following: [131063060010] |Xorg ignoring wine application [131063060020] |A wine application (Anarchy Online, great game, try it some time) is behaving very strangely. [131063060030] |While rendering correctly, Xorg isn't picking up on its existence, and because of this the game only refreshes its image when another application requests that portion of the screen to be updated. [131063060040] |For example, I will get maximum framerate if I move the window, or go into the compiz cube, while placing a terminal with top running in it behind the game will make that portion of the screen render at 3 fps (in the video I will link, it's running top -d 1.0, hence the 1 fps), whereas a quickly updating window shows the game at a more reasonable framerate. [131063060050] |In the video I have uploaded you can see this strange behavior, as the output from the game and top combine to basically split the game between fast fps and slow fps in real-time.
[131063060060] |Video of Xorg/wine issue (AFAIK can only be opened in totem and VLC, recordmydesktop is bugged) [131063060070] |Wine bug report [131063060080] |Does anyone know how to fix this problem? [131063060090] |Quick xorg config file? [131063060100] |Recompile wine? [131063060110] |I'll settle for a cheap trick (other than putting a looping terminal behind the game every time; that really drains CPU). [131063060120] |Edit: Turned out it was a d3d bug, fixable by compiling 1.2.2 instead. [131063070010] |It was a d3d bug added in the partial fix to the 1.3.7 crashing in that same application; I compiled 1.2.2 and it now works. [131063080010] |zsh alias expansion [131063080020] |Is it possible to configure zsh to expand global aliases during tab completion? [131063080030] |For example, I have the common aliases: [131063080040] |but when I type for example cd .../some it won't expand to cd .../something or cd ../../something. [131063080050] |Consequently, I frequently won't use these handy aliases because I can't tab complete where I want to go. [131063090010] |Try looking up zsh abbreviations. [131063090020] |They allow you to enter an "abbreviation" which automatically gets replaced with its full form when you hit a magic key such as space. [131063090030] |So you can create one which changes ... to ../.. [131063090040] |For example, this is what you need in your profile: [131063100010] |I have a custom ZLE widget for this; just drop it somewhere in a directory in $fpath. [131063100020] |You can then configure it this way. [131063110010] |How do you derive the passphrase from a hexadecimal WEP key? [131063110020] |I have a hexadecimal WEP key in the settings of my router. [131063110030] |When I try to connect, the system asks me for the WEP passphrase. [131063110040] |How can I derive it from my hexadecimal key? [131063110050] |This is what I have: [131063110060] |What is the way to get it? [131063110070] |Thanks a lot, and I apologize for my English.
[131063120010] |Windows will also accept the hexadecimal key; no conversion is needed. [131063120020] |In fact, it is the recommended format. [131063120030] |From the documentation for configuring WEP in Windows XP: [131063120040] |If you are typing the WEP key using hexadecimal digits, you must type 10 hexadecimal digits for a 40-bit key and 26 hexadecimal digits for a 104-bit key. [131063120050] |If you have the choice of the format of the WEP key, choose hexadecimal. [131063120060] |However, you should type 84127D134D083189A4AF970721, without the : characters. [131063120070] |Those are just separators which are not actually part of the key. [131063120080] |Finally, note that your key is now useless for security as you have published it on the internet. [131063130010] |Can I easily search my history across many screen windows? [131063130020] |My current screen session has 12 open windows on it. [131063130030] |It's been running for weeks... [131063130040] |I know I executed an ImageMagick convert command in one of these 12 screen windows sometime last week... is there any way I can easily search through the Bash history of all 12 instances, without closing them or running history | grep convert in all 12 screens? [131063140010] |Sounds difficult. [131063140020] |Here are a couple of methods that may work for you. [131063140030] |If you have process accounting tools installed (on Linux, look for a package called acct) and the permission to use them, you can find out when and possibly on what terminal you ran convert: [131063140040] |If this is unavailable or inconclusive, you can execute a history command in each instance of bash to look for convert commands. [131063140050] |This will find commands that were in the history file when bash started as well. [131063140060] |
  • $(seq 0 11) iterates over the numbers of your screen windows. [131063140070] |Make sure to skip windows that are not currently running bash but some other process that would interpret input differently.
  • [131063140080] |
  • screen -p $w stuff … sends the following string as input to the specified window. [131063140090] |You need the newline at the end of the string.
  • [131063140100] |
  • The file /tmp/convert.history will contain a list of lines like 3 convert foo.jpg, if you ran convert foo.jpg in window 3.
  • [131063150010] |Bash only writes the history when it exits, which makes this problematic at best. [131063150020] |I've heard that zsh can share history between active sessions. [131063160010] |You can run history -a; history -c in all windows to save the history. [131063160020] |And then history -r to refresh it. [131063160030] |To solve it more permanently, add this to your .bashrc: [131063170010] |"Too many arguments to function" error while installing php5-pdo_mysql from ports in FreeBSD [131063170020] |Hello, I'm having trouble while installing php5-pdo_mysql in my FreeBSD environment. [131063170030] |Every time I try to do /usr/ports/databases/php5-pdo_mysql make install the console gives me this error: [131063170040] |My php version is 5.3.3 and pear 1.9.1. I tried to upgrade the port tree of php5 to 5.3.5 but it failed. [131063170050] |It said I have to update apache22 first, but then updating apache22 also failed. [131063170060] |What should I do now? [131063180010] |Why is echo "bar" -n different from echo -n "bar"? [131063180020] |Compare [131063180030] |with [131063180040] |The former does what I think it's supposed to do (prints "bar" without a newline), while the latter doesn't. [131063180050] |Is this a bug or by design? [131063180060] |Why is it different from many CLI programs in that you can't move the options around? [131063180070] |For example, tail -f /var/log/messages is exactly the same as tail /var/log/messages -f. [131063180080] |I sometimes do the latter when I forget I wanted the former, because internally the options and arguments are rearranged, usually by getopt. [131063180090] |Update: yes I originally nerf-ed my question. [131063180100] |I removed the nerf; you'll have to view the history to make some of the answers make sense. [131063190010] |From simple inspection, this is not a bug; it's by design...
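A quick way to see the difference directly in a shell (bash behavior shown here; note that POSIX actually leaves echo's handling of -n implementation-defined, so treat this as a sketch of the GNU/bash case):

```shell
# With -n before the operand, bash's builtin echo treats it as an
# option and suppresses the trailing newline:
echo -n "bar"        # prints: bar (no trailing newline)

# With -n after the operand, it is just another argument and is
# echoed literally:
echo "bar" -n        # prints: bar -n
```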
[131063190020] |Try this command for yourself: [131063190030] |You will notice that the resulting md5 is [131063190040] |You just mixed up the use of -n in echo. [131063190050] |In your first sample code [131063190060] |-n is used to say DO NOT put a newline after this echo. [131063190070] |Your second sample [131063190080] |is treating -n as literal text. [131063190090] |Hope that explanation helps :) [131063190100] |Ismael Casimpan [131063200010] |The other answers get to the point that you can look at the man page to see that -n is becoming part of the string to echo. [131063200020] |However I just want to point out that it's easy to investigate this without the md5sum, which makes what's going on a little less cryptic. [131063210010] |Wow, everyone's explanations are lengthy. [131063210020] |It's this simple: [131063210030] |-n is an option if it appears before the string, otherwise it's just another string to be echoed. [131063210040] |Remove the | md5sum and you'll see the output is different. [131063220010] |As most others have observed, "-n" is interpreted literally if placed anywhere but immediately after the echo command. [131063220020] |Historically, UNIX utilities were all like this -- they looked for options only immediately after the command name. [131063220030] |It was likely either BSD or GNU who pioneered the more flexible style (though I could be wrong), as even now POSIX specifies the old way as correct (see Guideline 9, and also man 3 getopt on a Linux system). [131063220040] |Anyway, even though most Linux utilities these days use the new style, there are some holdouts like echo. [131063220050] |Echo is a mess, standards-wise, in that there were at least two fundamentally conflicting versions in play by the time POSIX came into being. [131063220060] |On the one hand, you have SYSV-style, which interprets backslash-escaped characters but otherwise treats its arguments literally, accepting no options.
[131063220070] |On the other, you have BSD-style, which treats an initial -n as a special case and outputs absolutely everything else literally. [131063220080] |And since echo is so convenient, you have thousands of shell scripts that depend on one behavior or the other: [131063220090] |Because of the "treat everything literally" semantic, it's impossible to even add a new option to echo without breaking things. [131063220100] |If GNU used the flexible options scheme on it, hell would break loose. [131063220110] |Incidentally, for best compatibility between Bourne shell implementations, use printf rather than echo. [131063220120] |UPDATED to explain why echo in particular does not use flexible options. [131063230010] |What are some excellent Emacs utter beginner resources? [131063230020] |I administer a couple of servers and do automation with both Python and Ruby at times. [131063230030] |I've seen some awesome YouTube videos of users with multiple open windows, logged-in background chats, and deep code completion, all done with Emacs. [131063230040] |I'm currently using vi and would like to learn Emacs. [131063230050] |Altogether doing away with any and all forms of GUI (suggestions for text browsing in Emacs welcome) would be perfect in my current environment, especially for decluttering and focus. [131063230060] |Content similar to http://platypope.org/yada/emacs-demo/tutorial.swf would be perfect. [131063240010] |There is an Emacs built-in tutorial, available by typing Ctrl+h, then t. [131063240020] |It can be considered the vimtutor equivalent. [131063250010] |I started with Learning GNU Emacs [131063260010] |See the questions on Stack Overflow: [131063260020] |
  • What to teach a beginner in Emacs?
  • [131063260030] |
  • Good Resources For Emacs
  • [131063260040] |
  • Resources for learning Emacs
  • [131063260050] |
  • How to quickly get started at using and learning Emacs
  • [131063260060] |Other resources to check out when you get stuck: [131063260070] |
  • FAQ: C-h C-f
  • [131063260080] |
  • info pages: C-h i
  • [131063260090] |
  • Emacs Wiki
  • [131063260100] |
  • M-x apropos-documentation (search Emacs variable/function documentation)
  • [131063270010] |My strategy was to go through the tutorial, and after that just google all the time. [131063270020] |Now it seems time to go through the elisp manual; problems I have these days do seem to require knowing more about that. [131063270030] |(let me sneak in a recommendation for org-mode at http://orgmode.org) [131063280010] |A really good video to watch is: http://peepcode.com/products/meet-emacs [131063280020] |Does an excellent job of covering all the basics, but it'll cost some money to get. [131063280030] |I asked a similar question on StackOverflow, and rather than me copying everything over, check out: [131063280040] |http://stackoverflow.com/questions/2393787/any-good-emacs-intro-videos [131063290010] |How to disable playlist song hover info in Amarok? [131063290020] |Anyone know how to disable this annoying hover info in Amarok 2.4? [131063290030] |I browsed through all the options (not that there are many of them), but couldn't find how to turn that feature off... [131063290040] |It's getting in the way when hovering over songs with your mouse and really is not necessary. [131063290050] |Thank you. [131063300010] |I poked around and couldn't find anything either, so I asked on IRC: [131063300020] |Mamarok: xenoterracide: this is additional tag info, it can't be disabled currently, there already is a bug report for it [131063300030] |So maybe it'll be disable-able in 2.5. [131063310010] |It may be turned off now, at least in 2.4-git. [131063310020] |However, make sure that the choice to allow the "tooltips" is unchecked in every playlist layout you use. [131063310030] |Playlist > Playlist Layouts. [131063320010] |How to merge patches [131063320020] |Sometimes I need to merge two patches. [131063320030] |I can apply them but often it is not convenient (copying, temporary files, etc.). [131063320040] |Is there a tool to merge patches (assuming they are not conflicting)? [131063330010] |If you manage patches often, then you may be interested in quilt.
[131063330020] |I believe it has a patch combination feature. [131063330030] |See Quilt (software) for more info. [131063340010] |If you're interested in maintaining the source, you could put it into a version control system, add patches to it, and get the patches you want out of it. [131063350010] |Any options to replace GNU coreutils on Linux? [131063350020] |I've been thinking about discontinuing the use of GNU Coreutils on my Linux systems, but to be honest, unlike many other GNU components, I can't think of any alternatives (on Linux). [131063350030] |What alternatives are there to GNU coreutils? Will I need more than one package? [131063350040] |Links to the project are a must, bonus points for naming distro packages. [131063350050] |Also please don't suggest things unless you know they work on Linux, and can reference instructions. [131063350060] |I doubt I'll be switching kernels soon, and I'm much too lazy for anything much beyond a straightforward ./configure; make; make install. [131063350070] |I'm certainly not going to hack C for it. [131063350080] |Warning: if your distro uses coreutils, removing them could break the way your distro functions. [131063350090] |However, not having them be first in your $PATH shouldn't break things, as most scripts should use absolute paths. [131063360010] |I suspect you'd have a hard time getting rid of GNU Coreutils, however, there's always the equivalent BSD tools, although they aren't drop-in replacements for the GNU tools. [131063370010] |BusyBox, the favorite of embedded Linux systems. [131063370020] |BusyBox combines tiny versions of many common UNIX utilities into a single small executable. [131063370030] |It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc.
[131063370040] |The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. [131063370050] |BusyBox provides a fairly complete environment for any small or embedded system. [131063370060] |BusyBox has been written with size-optimization and limited resources in mind. [131063370070] |It is also extremely modular so you can easily include or exclude commands (or features) at compile time. [131063370080] |This makes it easy to customize your embedded systems. [131063370090] |To create a working system, just add some device nodes in /dev, a few configuration files in /etc, and a Linux kernel. [131063370100] |You can pretty much make any coreutil name a link to the busybox binary and it will work. You can also run busybox directly and it will work. [131063370110] |Example: if you're on Gentoo and haven't installed your vi yet, you can run busybox vi filename and you'll be in vi. [131063370120] |It's available in: [131063370130] |
  • Arch Linux - community/busybox
  • [131063370140] |
  • Gentoo Linux - sys-apps/busybox
  • [131063380010] |Solaris (as of svn_140-something) would also be an option. [131063380020] |If you are using a distro you are crazy. [131063380030] |Stop now. [131063380040] |Seek psychiatric help. [131063380050] |If you are using LFS, rock on! [131063380060] |Have fun! [131063380070] |If you are making a distro, I applaud your bravery, sir. [131063390010] |Why does LD keep outputting "no version information available" [131063390020] |On every loading of a lib, I get the error: [131063390030] |no version information available [131063390040] |This lib was compiled on a different PC (Ubuntu 10.04) than the one running it (Mandriva 2010.2). [131063390050] |Edit: the workaround didn't work. [131063400010] |That error, "no version information available", means that the version of libz that you linked against when you compiled the library is newer than the version on the Mandriva system you're using. [131063410010] |Select YeAH-TCP as congestion control algorithm when configuring the kernel [131063410020] |I'm configuring/compiling the 2.6.37 kernel and I want to select YeAH-TCP as the default congestion control algorithm but, although I enable this option (CONFIG_TCP_CONG_YEAH), it doesn't show up in the congestion control algorithm list to select as default. [131063410030] |What's the reason for this? [131063410040] |Am I doing something wrong? [131063420010] |Does tcp_yeah appear in /proc/modules? [131063420020] |If not, you need to modprobe tcp_yeah. [131063430010] |Unable to find Red Hat Server [131063430020] |
  • Resolved: Solved on my own. [131063430030] |Someone else working on the same server remotely made some adjustments to the httpd .conf files while he was on vacation without notifying me. I've added the change/fix below in case someone ever has similar problems.
  • [131063430040] |SOLUTION: In var/etc/conf.d all the .conf files had their DocumentRoot set to the Drupal6 folder, instead of the Wordpress folder. [131063430050] |Running into a strange problem. [131063430060] |I have a local server running Red Hat on a virtual machine (accessed through putty). [131063430070] |I'm using this server to work on a website in Django, which I run with: [131063430080] |Now that works just fine, and I can access it through redHatIP:8080. [131063430090] |However, I recently downloaded and installed Wordpress, Drupal, and Mint. [131063430100] |Mint was added to the original site, and I access it through redHatIP/mint. [131063430110] |Wordpress and Drupal are completely unrelated to the original site, but are both located in /var/www/html (along with the original site located through my 8080 port). [131063430120] |I accessed drupal and wordpress through redHatIP/drupal and redhatIP respectively, and it was working just fine until today. [131063430130] |For some reason when I go to redHatIP now I receive the following error in my internet browser: [131063430140] |If I go to redHatIP/drupal I receive The requested URL /drupal was not found on this server, to redHatIP/anything I receive The requested URL /anything was not found on this server et cetera. [131063430150] |This applies to Mint as well, which I access through redHatIP/mint. [131063430160] |However, I can still access the original site through redHatIP:8080, and that works just fine. [131063430170] |I completely removed Django and wordpress, then downloaded them again. [131063430180] |I tried having just one of them located in /var/www/html/, but that did nothing. [131063430190] |I tried moving them to /var/www/, but that also did nothing.
[131063430200] |When I go to my error log messages in /pwd/log/httpd/error_log, it has the following message: [131063430210] |However, this was reported the day before all this happened, when I had both Django and Wordpress together, and there hasn't been any message since. [131063430220] |I have been unable to fix this on my own, nor have I found a solution on the net. [131063430230] |Any ideas? [131063430240] |Any other information that would help understand the problem? et cetera? [131063440010] |
  • Resolved: Solved on my own. [131063440020] |Someone else working on the same server remotely made some adjustments to the httpd .conf files while he was on vacation without notifying me. [131063440030] |I've added the change/fix below in case someone ever has similar problems.
  • [131063440040] |SOLUTION: In var/etc/conf.d all the .conf files had their DocumentRoot set to the Drupal6 folder, instead of the Wordpress folder. [131063450010] |How do you pronounce System V and SysV? [131063450020] |Is the V in System V and SysV (and sysvinit, etc) pronounced "vee" or "five"? [131063460010] |Since it is a Roman numeral, "five" is probably the more correct pronunciation... [131063460020] |Wikipedia agrees as well: [131063460030] |Unix System V, commonly abbreviated SysV (and usually pronounced—though rarely written—as "System Five"), ... [131063470010] |How do I install mercurial on openSUSE? [131063470020] |Hi, I need to install mercurial on my openSUSE but I couldn't find the rpm, so I just downloaded mercurial.rpm and wanted to install it by using: [131063470030] |but it said it needs Python 2.6, so I downloaded Python 2.6.0 and did the same, but it said that it needs the previous versions and the process failed. [131063470040] |I really don't know why this happens. [131063470050] |My question is: am I going the wrong way, or did I forget something? [131063470060] |I'm a beginner so please give me details. [131063470070] |Any help would be appreciated. [131063480010] |Check if you have yum installed by typing yum --version in your terminal prompt. [131063480020] |If you get something with a version number then you have it installed. [131063480030] |sudo yum install python - should install python. [131063480040] |Likewise, sudo yum install mercurial - should install mercurial. [131063480050] |EDIT-1: In case you are not comfortable with the command-line method, open up the package manager and search for both of them and install them that way. [131063480060] |My guess is you don't have to download the rpm and install python or mercurial. [131063480070] |It should be available with the distro package manager itself. [131063480080] |EDIT-2: If you want to search for a package use - yum search .
[131063480090] |If you don't know the full package name you can just use a part of the package name. [131063480100] |Another command is yum whatprovides . [131063480110] |For more commands refer here and here. [131063490010] |Modern Linux distributions include a package manager to resolve dependencies and provide a repository with software packages, thus avoiding problems like you've just encountered. [131063490020] |On openSUSE you generally have a choice of methods to install a .rpm package. [131063490030] |Either on the command line with zypper, yast (or, if available, yum), or via the graphical frontend YaST. [131063490040] |Note that you have to prepend sudo to the following commands, or issue them as root. [131063490050] |zypper [131063490060] |yum [131063490070] |yast [131063490080] |yast provides an interactive console-based GUI on which you can search and install software packages. [131063490090] |If you have a graphical frontend, you can also find YaST as a GUI in the menu. [131063490100] |All those package managers include the capability of searching for packages, so the next time you need one, use yast or the command search. [131063490110] |As an example: [131063490120] |will search for and display all available perl packages. [131063490130] |If you just want to install one (or more) locally available .rpm packages, you can simply use the given tool rpm. [131063490140] |As you already saw, this will only work if all the dependencies are already installed. [131063490150] |See here for more information about package management on openSUSE. [131063500010] |Why does the CentOS apache httpd-2.2.3 rpm remove the bundled apr, apr-util, pcre? [131063500020] |I am manually building httpd-2.2.17 from source. [131063500030] |Just to make sure I have the configuration options right, I checked the latest CentOS apache srpm (which is for httpd-2.2.3). [131063500040] |In the httpd.spec I find this line: [131063500050] |I was wondering why this is required?
[131063500060] |What's wrong with using the apr included within the default httpd source? [131063510010] |That's there because the apache RPM spec file has a "BuildRequire" for the apr-devel, apr-util-devel and pcre-devel packages, and the packager wanted the build to use the packaged version rather than what's bundled in the apache tarball. [131063510020] |For what it's worth, here's the change that was made to add that line, perhaps that'll help answer your question: link text [131063510030] |That's an edit from 6 years ago, so it's not identical to a current package, but you can see elsewhere in the patch how using the apr-config from the packaged version of apr-devel is added. [131063520010] |They are packaged together for convenience to the user. [131063520020] |In a distro-maintained system, there is a lot of other software that uses apr, apr-util and pcre, and it makes sense to install them separately. [131063520030] |Installing them separately saves memory (because you have only one copy of the library functions and data in memory) and is easier to upgrade (especially for security updates), since you don't have to redownload and reinstall all software that embedded those libraries. [131063530010] |Why did my sudoers file suddenly reset itself? [131063530020] |After setting up Wave in a Box on my Debian server, I started noticing strange things happening: I could no longer use sftp to transfer files, and even worse, I couldn't run any sudo commands! [131063530030] |Thankfully, I could still su to root. [131063530040] |When I checked /etc/sudoers, I found that it had been completely reset, with only the root user having sudoer permission. [131063530050] |I don't think my server was hacked, since I'm running denyhosts to shut out any attempts. [131063530060] |Any idea what could have caused /etc/sudoers and /etc/ssh/sshd_config to reset themselves? [131063540010] |Debian takes care not to mess with your configs, but who knows?
[131063540020] |Did you recently upgrade your sudo (cat /var/log/apt/history.log)? [131063540030] |Which version do you have installed? [131063540040] |If you are running Debian Sid, you might have been adversely affected by an update to 1.7.4p4-6. [131063540050] |The upload was meant to fix this bug. [131063550010] |This could happen if the packages were purged and reinstalled. [131063550020] |Purging removes old config files, and reinstalling would bring you back to the default state. [131063560010] |Must an X11 server be installed for X11 forwarding over ssh to work? [131063560020] |I would expect that my local X11 server and sshd X11 forwarding turned on is all that is needed for X11 over ssh to work, but I haven't been able to get it working. [131063560030] |Am I wrong about this? [131063560040] |Does the remote system need an X11 server? [131063570010] |You don't need an X server on the remote side of the X session, but you will need xauth, which is usually included in an X-related package (xorg-x11-xauth in RHEL and Fedora). [131063570020] |If you want to run any programs that use X libraries (or libraries that require the X libraries), you'll need X libraries on the remote end to execute those programs. [131063580010] |Errors from cfdisk with new external USB backup drive [131063580020] |I've picked up an HP SimpleSave sd500a backup drive. [131063580030] |This is a 2.5", 500GB drive. [131063580040] |It has a mysterious CD-like partition, but otherwise seems to contain a WD Scorpio Blue disk. [131063580050] |It seems that the CD-like partition is implemented in the enclosure's firmware, but I've no way to be certain of this. [131063580060] |I'm repartitioning the drive for the first time.
[131063580070] |When attempting to open the drive using cfdisk /dev/sdb, it exits with status 4 after outputting this error message: [131063580080] |sfdisk -l is able to output info on the drive without errors: [131063580090] |Is the error from cfdisk any reason to question the stability of the drive or the compatibility of its firmware? [131063590010] |cfdisk reads the partition table of the device at startup; it will exit if the geometry of a partition is wrong. [131063590020] |You can force cfdisk to not read the existing partition table by adding -z: [131063590030] |This is cfdisk-specific behavior; fdisk will show a similar error but won't exit. [131063590040] |The stability of the drive is not affected, it's just a partition issue. [131063590050] |Alternatively, use a partition tool like fdisk, parted or gparted. [131063590060] |I've just checked my own partition and the first one (/boot) also reported this error. [131063590070] |I never had any problems with it. [131063600010] |Supposing there isn't anything of value in there, remove the partitions and create a new one (either ext3 or ext4), and run e2fsck -c on it to check if it has bad blocks. [131063600020] |If that passes, and you happen to be paranoid, run e2fsck -cc to do a more thorough (and much longer) test. [131063610010] |What is the function/point of "config.sub" [131063610020] |Hey, [131063610030] |I'm trying to install some software from the command line. [131063610040] |There is a file called "config.sub". [131063610050] |Am I supposed to use this for something? [131063610060] |I haven't yet been able to find out by searching online what this file is supposed to do. [131063610070] |I think part of the deal is I don't know how to ask the question correctly. [131063620010] |config.sub is one of the files generated by autoconf. [131063620020] |The Autoconf documentation states that it converts system aliases into full canonical names.
[131063620030] |In short - you don't have to worry about it unless you're autoconf developer. [131063630010] |How can I change the language in chromium? [131063630020] |I've installed chromium, but it deeply sucks that it uses my mother tongue (german) in its UI and for websites by default. [131063630030] |I want the english back, like firefox did. [131063630040] |I'm using archlinux's default packages. [131063630050] |I looked into the settings dialogs, but I found nothing useful. [131063640010] |I use version 6.0.472.63 and I found Change font and language settings under Customize and control Chromium --> Options --> Under the hood. [131063650010] |There is a preference to set the language preference for web pages: in the “Preferences” dialog (which may be called differently in your version), in the “Under the Hood” tab, click “Change font and language settings”. [131063650020] |This doesn't give you full control (you can only select languages that Chrome knows about, and there won't be a * at the end to make the server fall back on whatever it has available if it doesn't happen to know your language). [131063650030] |In the Preferences file in your profile (i.e. typically ~/.config/google-chrome/Default/Preferences), you can tune the setting more finely: [131063650040] |{ "intl": { "accept_languages": "en_US,en_GB,en,de_DE,de,*", }, } [131063650050] |(The syntax is JSON. [131063650060] |Edit the existing "intl" section.) [131063650070] |The language of the user interface is determined by the LC_MESSAGES environment variable, which is the standard setting under unix. [131063650080] |At least, this is the case for Google Chrome 9.0.597.45 under Debian (from Google's apt repository); Chromium under Ubuntu lucid seems determined to speak to me in English. 
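To illustrate the LC_MESSAGES approach: an environment assignment prefix affects only the launched process, so you can get an English UI for one browser session without touching your system-wide locale. The chromium binary name below is an assumption (on some distros it is chromium-browser):

```shell
# The override is visible only to the child process, not your session:
LC_MESSAGES=en_US.UTF-8 sh -c 'echo "$LC_MESSAGES"'   # prints: en_US.UTF-8

# The same pattern applied to the browser (binary name is an assumption):
#   LC_MESSAGES=en_US.UTF-8 chromium
```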
[131063660010] |How to check what causes gtk+ 2/gtk+ 3 clash in icedtea [131063660020] |I try to run icedtea on a mixed gtk+ 2/gtk+ 3 system and I get a symbol clash detection [for those who don't know - gtk+ 2 detects if gtk+ 3 symbols are in scope and stops the program from running if they do - for a good reason]. [131063660030] |I tried to find out which libraries are pulled into the scope by debugging, but: [131063660040] |
  • The output of strace on open calls shows no loading of gtk+ 3
  • [131063660050] |
  • breaking on _dl_open (sorry I don't remember the exact name) did not show anything
  • [131063660060] |
  • info symbol on gtk_widget_device_is_shadowed shows nothing
  • [131063660070] |Possibly to complicate matters: [131063660080] |
  • The address returned by g_module_symbol is, according to the memory map, not backed by any file.
  • [131063660090] |Any idea how to track this clash? [131063670010] |Inittab seems to ignore remount,rw / [131063670020] |I have an embedded system that boots off compact flash and runs with the initrd.img ramdisk mounted as root. [131063670030] |When booting, it mounts the initrd image OK in read-only mode, but then when inittab runs, it seems to skip over the first mount command which is [131063670040] |I have /etc/fstab set up with the correct options as far as I know: [131063670050] |The system then manages to get me a command prompt and I can then log in as root and type the mount command, which works without a problem. [131063670060] |Furthermore, this same setup has worked on a seemingly identical piece of target hardware. [131063670070] |The difference is that I am creating the boot image from my laptop instead of the usual server that we use. [131063670080] |My laptop is running a newer version of grub which I use to make a bootloader for the image. [131063670090] |Perhaps I also have a newer version of genext2fs which is used to make the image used as the ramdisk. [131063670100] |The server is running FC10 but my laptop is using Ubuntu, so there must be some differences that I'm overlooking which are affecting mount or inittab. [131063670110] |Could it be something to do with /dev/null? [131063670120] |Why isn't the system remounting the ramdisk image, and how can I fix it? [131063680010] |Are you sure your distribution reads /etc/inittab? [131063680020] |For example, Ubuntu now uses upstart, which uses a different set of configuration files. [131063680030] |Another thought: put another entry in there that runs mount and redirects the output to a file. [131063680040] |That will prove whether inittab is being read, and whether the filesystems are in the expected state at the time you're trying to run the remount.
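The debugging entry suggested above could look like this in busybox-style inittab syntax (a sketch only; the log path /tmp/mounts-at-sysinit.log is arbitrary, and the entry should be removed once the problem is diagnosed):

```
# Hypothetical /etc/inittab debug entry: dump the mount table at sysinit
# so you can confirm inittab is read and inspect the filesystem state.
::sysinit:/bin/sh -c '/bin/mount > /tmp/mounts-at-sysinit.log 2>&1'
```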
[131063690010] |I managed to solve my problem using two steps: [131063690020] |The first was to get rid of the null at the start of the line in inittab, which just allows errors to be seen on the console. [131063690030] |This revealed that the error had to do with /proc/mounts. [131063690040] |I changed inittab so that the ::sysinit:/bin/mount -t proc /proc came above the remount,rw / stuff and it's now OK. [131063690050] |The mystery remains why the other system boots anyway with supposedly identical binaries of the kernel and busybox - I'm still thinking that genext2fs must set something up differently in my version such that the mount -o remount,rw command is happy to go ahead without /proc/mounts. [131063700010] |Can't get pop-down menus to do anything in KDE/QT application GUIs in Ubuntu (Gnome session) [131063700020] |I have a brand new install of Ubuntu 10.10 x64 in an attempt to solve the following problem from my previous 10.04 x86 install, but with the exact same problem persisting. [131063700030] |When trying to run Amarok or Skype, which I am told produce their GUI component via the KDE and QT frameworks (I may be misusing these terms, but I hope my point comes across), both of these applications fail to produce menus when a menu-producing button is clicked. [131063700040] |This, of course, makes both of these apps useless. [131063700050] |I installed both apps through the Ubuntu software manager, so I expected that any KDE runtime dependencies would have been taken care of already. [131063700060] |I then installed kdelibs manually from the console, which didn't appear to be installed already, but this didn't seem to change anything. [131063700070] |I installed a QT GUI settings management application to see if somehow menus were turned off or something for QT applications, but the QT settings application itself appears to suffer from the same lack of pop-down menus... [131063700080] |I don't have this problem with any of my Gnome apps.
[131063700090] |What is wrong here? [131063700100] |How can I make these applications work? [131063710010] |Solved! [131063710020] |It appears that the menus in question relied upon 'visual effects' being enabled in the 'Appearances' section of the user preferences. [131063710030] |Setting visual effects to either Normal or Extra thus fixes this problem. [131063720010] |Problem with xmobar [131063720020] |I'm setting up an xmonad window manager and I came across the following problem - when I try to configure xmobar and run it, it shows: [131063720030] |My configuration file [131063720040] |Configuration copied from the Haskell wiki. [131063720050] |My version of xmobar is 0.9.2. [131063730010] |When I try to run xmobar with your config I get the error [131063730020] |xmobar: user error (createFontSet) [131063730030] |Your error is probably different because I'm using version 0.11.1. [131063730040] |To fix that I had to change the line with the font setting to [131063730050] |font = "xft:Liberation Mono:pixelsize=10" [131063730060] |Hope that helps :) [131063740010] |How to change Fedora 14 dual-monitor default behavior (No clone) [131063740020] |Hello, I am running F14 on my laptop, and I use it either with or without an external screen. [131063740030] |When the VGA screen is plugged in, every time I reboot the system sets it up so that VGA1 is a clone of LVDS1, and I have to do a: [131063740040] |How can I change it to be the default behavior? [131063740050] |Thanks! [131063750010] |Assuming you're using Gnome, try using System->Preferences->Monitors. [131063750020] |KDE has a similar tool in System Settings->Display. [131063750030] |I don't know if hot-plugging makes any difference, but this tool will save your settings so any time you reboot it'll use the same settings as last time. [131063760010] |What is the yum equivalent of 'apt-get update'? [131063760020] |Debian's apt-get update fetches and updates the package index. 
[131063760030] |Because I'm used to this way of doing things, I was surprised to find that yum update does all that and upgrades the system. [131063760040] |This made me curious about how to update the package index without installing anything. [131063770010] |The check-update command will refresh the package index and check for available updates: [131063780010] |While "yum check-update" will refresh the local cache, if it needs to be refreshed, so will most other commands. [131063780020] |The command that's strictly the equivalent of "apt-get update" is "yum makecache" ... however it's generally not recommended to run that directly in yum. [131063790010] |How to access the network without NetworkManager? [131063790020] |If I don't have NetworkManager installed on Debian, I edit "/etc/network/interfaces" and then run ifup eth0. [131063790030] |I see that things are different in Fedora 14 (the directory and the command aren't available). [131063790040] |How do I do it? [131063800010] |You can continue to use the 'network' service. [131063800020] |Just run [131063800030] |and [131063800040] |and it will start the interfaces that are set up in /etc/sysconfig/network-scripts/ifcfg-*. [131063800050] |You might want to check the settings in those files to make sure that any interface you want to start automatically on boot has "ONBOOT=yes" set. [131063800060] |To edit those files through an interface, you can use 'system-config-network', which is part of the system-config-network package (if not already installed). [131063810010] |How come I don't have package updates in Fedora? 
[131063810020] |I last updated my Fedora 14 install weeks ago, so I expected a slew of new packages waiting for me, but when I run yum update, I instead get: [131063810030] |update: [131063810040] |rpm -qi sudo gives version 1.7.4p4 [131063810050] |yum -v repolist gives: [131063820010] |Your repository cache is out of date; clean it and execute the update again: [131063830010] |Are there any apps that will post to multiple social networking accounts at once? [131063830020] |I currently use Ping.fm to post to multiple social networks at once, but it's been irritating me for a while and I'm thinking of writing a tool to replace it for myself. [131063830030] |But I figure it's possible that someone has already done that. [131063830040] |To be a suitable replacement it needs to handle Twitter and Facebook; MySpace, LinkedIn, and Google Buzz are a bonus; of course, everything else is frosting. [131063830050] |In addition to allowing me to send messages it must also be able to read multiple RSS feeds and aggregate them to the networks I set up. [131063830060] |If two or more tools in conjunction can be made to do this, that is also acceptable, e.g. feedreader | cleanupforstream | post2stream. [131063830070] |If it doesn't exist I think I'll just write some Perl to do it for me, and put it on CPAN. [131063840010] |Gwibber has the ability to send to multiple services at once. [131063840020] |According to their website it supports the following protocols/services: 
  • Twitter
  • [131063840040] |
  • Identi.ca/StatusNet
  • [131063840050] |
  • Ping.fm
  • [131063840060] |
  • Facebook
  • [131063840070] |
  • FriendFeed
  • [131063840080] |
  • Buzz
  • [131063840090] |
  • Digg
  • [131063840100] |
  • Flickr
  • [131063840110] |
  • Qaiku
  • [131063840120] |As far as I know, it has the ability to receive content from all of the listed services, but I'm not sure if there is a way to receive arbitrary RSS feeds. [131063850010] |Multiple-boot from ISO files does not show OS menu [131063850020] |I used instructions from PenDriveLinux.com to create a multiple-boot USB drive with some ISO images on it. [131063850030] |I used a Xubuntu 10.10 Desktop image and a Linux Mint 9 XFCE image. [131063850040] |I was able to boot either of the two operating systems. [131063850050] |Each one booted directly to the desktop, however. [131063850060] |If I boot *Ubuntu from an ISO image which has been "burned" to a CD or a USB, I am presented with a menu prompting me to install or try the OS, test memory, etc. [131063850070] |Why does booting from the ISO go directly to the desktop, whereas the other method presents the OS menu first? [131063850080] |Update [131063860010] |It's due to the bootloader setup on the potentially-multi-boot USB drive. [131063860020] |The Grub configuration for the drive is set up to boot the various OSes directly: it contains entries like [131063860030] |Such an entry boots directly into the indicated operating system, bypassing the bootloader inside the ISO. [131063860040] |I think it would be possible to switch to a different configuration file with configfile (loop)/path/to/grub.cfg, if the bootloader inside the ISO is also Grub2 (which is not so common on CDs). [131063860050] |Loading the bootloader inside the ISO would be difficult, as the bootloader would have to understand where to find its components. [131063870010] |You are using an old version of grub. Run sudo apt-get install grub2 and then try again. [131063880010] |How to check available package versions in rpm systems? 
[131063880020] |If I want to check available versions of a package in Debian, I run apt-cache policy pkgname, which in the case of wajig gives: [131063880030] |That means that there are three wajig packages, one that is installed (/var/lib/dpkg/status), and two others (which are the same version). [131063880040] |One of these two is in a local repository and the other is available from a remote repository. [131063880050] |How do I achieve a similar result on rpm systems? [131063890010] |To query the available packages, you can do urpmq --sources YOURPACKAGE. This is Mandriva-specific (I only know Mandriva). [131063890020] |If you want to know the version of an installed package: rpm -q YOURPACKAGE. This works on all RPM systems. [131063890030] |On RedHat/Fedora, see yum. [131063900010] |yum [131063900020] |Provides the command list to display information about installed and upgradeable packages. [131063900030] |zypper [131063900040] |Can return a detailed list of available and installed packages or patches. [131063900050] |Adding --exact-match can help if there are multiple packages. [131063900060] |As a side-note, here is a comparison of package-management commands. [131063910010] |How to generate a report summary of messages that triggered a specific DSN code. [131063910020] |CENTOS 5.x | Sendmail [131063910030] |Hello All, [131063910040] |I hope this is a simple question. =) I need to generate a report summary of messages that triggered a specific DSN code. [131063910050] |For example: [131063910060] |Normally, I would just grep for this information (something like: grep -i "dsn=5.7.1" /var/log/maillog). [131063910070] |But the problem is that this only returns a line like the one above and doesn't tell me the sender of the message. [131063910080] |Ideally, I'm looking for a one-liner that can do the following: 
  • Search sendmail maillog for specific DSN.
  • [131063910100] |
  • Identify the message-id for the email. [131063910110] |(I'm guessing awk '{print $}' would be used?)
  • [131063910120] |
  • Return the message details for each (presumably grepping for the message ID retrieved from step 2).
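The steps above could be sketched as a pipeline like the following (the sample log lines and the awk field number for the queue ID are assumptions - real maillog formats vary, so adjust the field index to match yours):

```shell
# Build a tiny sample maillog just to demonstrate the pipeline.
log=$(mktemp)
cat >"$log" <<'EOF'
Jan  1 00:00:01 mx sendmail[101]: q01AAA01: from=<alice@example.com>, size=100, relay=host1
Jan  1 00:00:02 mx sendmail[101]: q01AAA01: to=<bob@example.net>, dsn=5.7.1, stat=Service unavailable
Jan  1 00:00:03 mx sendmail[102]: q01BBB02: from=<carol@example.com>, size=200, relay=host2
Jan  1 00:00:04 mx sendmail[102]: q01BBB02: to=<dave@example.net>, dsn=2.0.0, stat=Sent
EOF
# Step 1: find the lines carrying the DSN; step 2: extract the queue ID
# (field 6 here, trailing colon stripped); step 3: print every log line
# for each matching queue ID, which includes the from= line.
report=$(grep 'dsn=5\.7\.1' "$log" \
  | awk '{print $6}' | tr -d ':' | sort -u \
  | while read -r qid; do grep "$qid" "$log"; done)
printf '%s\n' "$report"
rm -f "$log"
```

Against a real system you would replace the temp file with /var/log/maillog and the DSN pattern with the code you are after.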
[131063910130] |I'm a n00b at scripting/one-liners so I'm sure there's probably an easier way to do this. [131063910140] |Any thoughts? [131063910150] |Thanks, [131063910160] |-M [131063920010] |how to "unswap" my desktop [131063920020] |Hi, [131063920030] |If my desktop runs out of memory and swaps a lot, then I free or kill the application wasting my RAM. [131063920040] |But, after that, all my desktop/applications have been swapped and are horribly slow; do you know a way to "unswap" my desktop/applications? [131063920050] |Thanks. [131063930010] |If you really have enough RAM available again you can use this sequence (as root): [131063930020] |(to force the explicit swap-in of all your applications) [131063930030] |(assuming that you are using Linux) [131063940010] |swapon/swapoff will completely clear your swap space, but you can free some of it via the /proc file system too. [131063940020] |You want the first one: [131063940030] |via http://linux-mm.org/Drop_Caches [131063950010] |The following quick-and-dirty python script dumps the memory of a process to stdout. [131063950020] |This has the side effect of loading any swapped-out page or mapped file. [131063950030] |Call it as cat_proc_mem 123 456 789 where the arguments are process IDs. [131063950040] |This script is completely specific to Linux. [131063950050] |It may be adaptable to other systems with a similar /proc structure (Solaris?), but forget about running it on e.g. *BSD. [131063950060] |Even on Linux, you may need to change the definition of c_pid_t and the values of PTRACE_ATTACH and PTRACE_DETACH. [131063950070] |This is a proof-of-principle script, not meant as an example of good programming practices. [131063950080] |Use at your own risk. [131063950090] |Linux makes the memory of a process available as /proc/$pid/mem. [131063950100] |Only certain address ranges are readable. 
[131063950110] |These ranges can be found by reading the memory mapping information from the text file /proc/$pid/maps. [131063950120] |The pseudo-file /proc/$pid/mem cannot be read just by any process that has permission to open it: the reader process must also have called ptrace(PTRACE_ATTACH, $pid). [131063950130] |See also more information on /proc/$pid/mem. [131063960010] |How do I recursively apply PKGREPOSITORY when calling make package-recursive in FreeBSD? [131063960020] |I'm trying to create a package of Apache and its dependencies: [131063960030] |Everything works fine; Apache and its dependencies compile and install and apache22.tbz is in gvkv. [131063960040] |The problem is that the dependency packages are built in their respective ports/ directories! [131063960050] |There are about fifteen of them and while it's easy enough to retrieve them with find and a perl one-liner, surely there must be a way to tell make to run in an environment such that the dependency packages end up in gvkv. [131063970010] |Part of the fun of using FreeBSD is dealing with the ports subsystem. [131063970020] |It's good in many ways--easy installation and upgrading within the FreeBSD ecosystem--but poor in others--setting variables via make configuration files or the environment doesn't work as expected or even as advertised. [131063970030] |Nevertheless, SirDice has come to the rescue with a neat little trick: [131063970040] |which is really cool because it bypasses the use of environment variables and you can set the destination directory to wherever you want. [131063970050] |Very helpful if you're (like me) using a 'build-jail' to make packages that are installed on different systems or other jails. [131063970060] |The /usr/ports/packages directory is where PACKAGES points (if it exists); PACKAGES is supposedly able to point somewhere else, but that didn't work for me. [131063970070] |The man page stipulates setting PKGREPOSITORY, which only works if you are building a single package. 
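As a sketch of the PACKAGES approach under discussion (the path is hypothetical; PACKAGES is the ports variable that the package targets use as the destination tree):

```
# /etc/make.conf -- hypothetical sketch
PACKAGES=/home/gvkv/packages

# or as a one-off on the command line:
# make PACKAGES=/home/gvkv/packages package-recursive
```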
[131063980010] |I've read that PKGREPOSITORY is dependent on PACKAGES. [131063980020] |You can set PACKAGES in /etc/make.conf. [131063980030] |For example: [131063990010] |How do I reimage OpenWRT? [131063990020] |How do I reimage OpenWRT in such a way that all my settings will be lost? [131063990030] |I've been having some issues, and I want to ensure that it's not a lingering setting; I want this to be a fresh install. [131064000010] |OpenWRT versions from Kamikaze onwards (which is basically Kamikaze and Backfire, but not White Russian) do not use NVRAM to store settings or configuration. [131064000020] |It is all stored in the filesystem, either in the base squashfs image or the overlaid jffs image. [131064000030] |This means you should be able to re-flash the image and get back to "factory defaults". [131064000040] |The way to flash an OpenWRT image is described at http://wiki.openwrt.org/doc/howto/installing . [131064000050] |Once you have OpenWRT installed the first time, the easiest way to reflash is to use the "via the OpenWrt command line" method. [131064000060] |Pay attention to the differences between .trx images and .bin images. .trx images are "raw" generic openwrt images used by the command line installation method. .bin images have vendor-specific headers, so you need to have the appropriate image for your router. [131064000070] |There are some settings stored in NVRAM that are used by the bootloader but I don't think they should persist once the Linux image has booted. [131064000080] |Possibly MAC addresses may persist, but can be overridden in the filesystem configuration anyway. [131064000090] |Whatever you do, do not indiscriminately wipe the NVRAM. [131064000100] |You will almost certainly brick the device, and it may stay bricked unless you can find on the net the appropriate settings to restore manually for your device. [131064010010] |Open Source router firmware options? [131064010020] |What distros exist that are designed for routers? 
[131064010030] |Please include the following [131064010040] |
  • link to the project page
  • [131064010050] |
  • link to supported hardware list
  • [131064010060] |
  • what distinguishes them, why pick this option
  • [131064010070] |
  • friendly web interface?
  • [131064010080] |
  • above friendly interface easily disabled?
  • [131064010090] |
  • package management for software not initially installed? (e.g. ipkg/opkg)
  • [131064010100] |
  • good documentation?
  • [131064020010] |OpenWrt is a powerful distribution for open source routers. [131064020020] |It supports a lot of devices. [131064020030] |It uses a 2.6 kernel; a 2.4 kernel is available, too. [131064020040] |The web-interface is surprisingly useful, e.g. it supports switching between normal and advanced mode; in the advanced mode it supports more options and transactions for a set of configuration changes. [131064020050] |It can be easily disabled. opkg is initially installed. [131064020060] |There is some documentation. [131064020070] |They have a manual. [131064020080] |Regarding hardware support, the wiki has a lot of information. [131064020090] |You can find a lot of useful stuff via a Google search in their web forums (why can't they use mailing lists like a normal open source project?). [131064020100] |what distinguishes them, why pick this option [131064020110] |OpenWrt has a history of continuous development. [131064020120] |It is not a fork, where you would have to worry whether you get updates (i.e. current releases). [131064020130] |At the moment, various firmware images of the current release for different hardware devices are available for flashing - no need to set up a cross-compile environment, figure out a sane default configuration etc. [131064020140] |A lot of setups are supported out of the box (e.g. bridging, non-bridging, vlan tagging, pppoe, UMTS sticks etc.) - the web-interface is impressive - even if you don't plan to use it for regular stuff, it demonstrates powerful configuration possibilities of the base system. 
  • Gargoyle is based off of OpenWRT and uses the same hardware table. [131064040040] |See installation manual for more details (read carefully).
  • [131064040050] |
  • I do not believe you can disable the web interface (possibility exists), but you can SSH into the router.
  • [131064040060] |
  • Unsure of kernel version
  • [131064040070] |
  • Not documented as thoroughly as OpenWRT, but Gargoyle is based on it, so you can defer to OpenWRT's documentation for more specific information.
  • [131064040080] |
  • Unsure on package management
  • [131064040090] |
  • easy to update through the web interface
  • [131064040100] |Gargoyle page [131064040110] |supported hardware [131064050010] |Can't shut down Mandriva 2010.2 [131064050020] |I can't seem to shut down my machine from Mandriva; I have to restart to shut it down from GRUB. [131064050030] |It says "System Halted" and just freezes there. [131064050040] |I'm using the x86_64 GNOME version. [131064050050] |I've already asked on the Mandriva forums and they couldn't determine the cause of this problem. [131064050060] |P.S. [131064050070] |I'm using the 2.6.36.2-desktop-2mnb kernel. [131064060010] |It turns out that the cause of the problem was ACPI; it was switched off. [131064060020] |Turning it back on solved the problem :) [131064060030] |Just in case somebody has a similar issue. [131064070010] |On-the-fly monitoring of HTTP requests on a network interface? [131064070020] |For debugging purposes I want to monitor the HTTP requests on a network interface. [131064070030] |Using a naive tcpdump command line I get too much low-level information and the information I need is not very clearly represented. [131064070040] |Dumping the traffic via tcpdump to a file and then using Wireshark has the disadvantage that it is not on-the-fly. [131064070050] |I imagine a tool usage like this: [131064070060] |I am using Linux. [131064080010] |You can use httpry or Justsniffer to do that. [131064090010] |I think Wireshark is capable of doing what you want. [131064100010] |Try tcpflow: [131064100020] |Output is like this: [131064100030] |You can obviously add additional HTTP methods to the grep statement, and use sed to combine the two lines into a full URL. [131064110010] |Check package version using apt-get/aptitude? [131064110020] |Before I install a package I'd like to know what version I would get. [131064110030] |How do I check the version before installing using apt-get or aptitude on Debian or Ubuntu? 
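A quick sketch of the kind of check being asked for (package name hypothetical; -s makes apt-get simulate instead of acting):

```
$ apt-cache policy wajig       # shows the Installed and Candidate versions
$ apt-get install -s wajig     # simulation: reports what would be installed, changes nothing
$ apt-show-versions wajig      # requires the apt-show-versions package
```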
[131064120010] |apt-get [131064120020] |You can run a simulation to see what would happen if you upgrade/install a package: [131064120030] |To see all possible upgrades, run an upgrade in verbose mode and (to be safe) with simulation; press n to cancel: [131064120040] |apt-cache [131064120050] |The option policy can show the installed and the remote version (install candidate) of a package. [131064120060] |apt-show-versions [131064120070] |If installed, shows version information about one or more packages: [131064120080] |Passing the -u switch with or without a package name will only show upgradeable packages. [131064120090] |aptitude [131064120100] |The console GUI of aptitude can display upgradeable packages with new versions. [131064120110] |Open the menu 'Upgradable Packages'. [131064120120] |Pressing v on a package will show more detailed version information. [131064120130] |Or on the command line: [131064120140] |Passing -V will show detailed information about versions, again (to be safe) with the simulation switch: [131064120150] |Substituting install with upgrade will show the versions of all upgradeable packages. [131064130010] |Do I need the pata_atiixp or ata_generic kernel modules on a SATA-only system? [131064130020] |I don't have any IDE drives and my only SATA hard drive is running in AHCI mode, but my initrd image loads the pata_atiixp module. [131064130030] |Is it safe to disable this module? [131064130040] |And what about the ata_generic one? [131064140010] |To answer the first question: yes. [131064140020] |But anyway, it should be easy to generate a backup entry in your boot manager (with the original initrd and working kernel), in case something goes wrong. [131064140030] |To answer the second one - you can use [131064140040] |on your running system to see if ata_generic is loaded and, if it is, which modules depend on it (look at the "Used by" column). [131064150010] |Is it possible to rename a unix user account? 
[131064150020] |I installed Ubuntu on a computer that is now used by somebody else. [131064150030] |I renamed the account with her name, but it only changes the full name, not the user name, which is still displayed in the top right (in the fast user switch applet). [131064150040] |Is there a command to rename a unix user account? [131064150050] |It doesn't seem so… [131064150060] |I've thought of creating a new user account with the new name, and then copying everything in the "old" home to the home of the new account. [131064150070] |Would it be enough? [131064150080] |But then I think the files would still have the old account as their owner? [131064150090] |So should I do chown -R newuser ~? [131064150100] |Is there a simpler/recommended way to do this? [131064150110] |Thanks [131064160010] |Try [131064160020] |The -m option moves the old home directory's contents to the new one given by the -d option, which is created if it doesn't already exist. [131064160030] |See the man page for more info: [131064170010] |Diacritics do not work in GTK+ applications running on Mac OS X Snow Leopard [131064170020] |I am using some GTK+ applications in Mac OS X Snow Leopard. [131064170030] |My MacBook is configured to use the Brazilian layout, which allows it to enter diacritics as dead keys (so I can write á by typing ' and then a, for example). [131064170040] |However, it does not work in some GTK+ applications - in this case, Gedit and GnuCash. [131064170050] |In other ones, such as Inkscape and Dia, I can enter diacritics as dead keys. [131064170060] |It is worth noting that Dia and Inkscape run through X11 but Gedit and GnuCash (at least my versions) do not. [131064170070] |Does anybody have this problem? [131064170080] |Has someone solved it? [131064170090] |How could I configure the keyboard layout for GTK+? [131064170100] |Do I do it in some ~/.gconf* or ~/.gnome file? [131064170110] |Thanks in advance! 
[131064180010] |Straightforward Linux Clustering [131064180020] |We have many unused PC machines and we would like to use them to set up an educational lab for high-performance computing applications. [131064180030] |Which Linux distribution is the most convenient to set up and easy to manage in an educational environment? [131064180040] |I would be thankful if someone provided me with a list of advantages and disadvantages of different Linux clustering distributions. [131064190010] |We have a small cluster that has openSUSE as its base distro, but I do not think it is too important. [131064190020] |Ubuntu looks like a viable alternative and has quite a bit of documentation and community support. [131064190030] |On top of Linux, we run Sun Grid Engine (and our cluster even includes Mac OS machines pretty seamlessly), but slurm would probably work for a simple setup. [131064190040] |We share home directories and /usr/local via NFS from a central server. [131064190050] |It works just fine for us. [131064190060] |More details are available on our website. [131064200010] |There's the Rocks Linux distro, which is made for clustering and is based on CentOS/RHEL. [131064200020] |The strong point of Rocks is that it'll for the most part manage and do a lot of the minutiae for you. 
  • It'll do automatic installation and reinstallation, and if your computers can boot via PXE, the initial install will consist of PXE booting your nodes. [131064200040] |If you have a large number of compute-nodes, they use bittorrent internally for distributing packages, which removes a significant bottleneck for (re)installing the entire thing.
  • [131064200050] |
  • It'll give you a very homogeneous compute-environment by default.
  • [131064200060] |
  • By default it'll set up and use NFS internally, and there's options for using PVFS2 (which I haven't tried).
  • [131064200070] |
  • As for queueing/batch systems, it should set up and manage this for you; by default I think it uses SGE, and there's also a roll (their software bundling format) for torque.
  • [131064200080] |
  • It'll ensure consistency in users/groups/etc. across your cluster
  • [131064200090] |
  • It'll graph resource utilization through ganglia
  • [131064200100] |If I were to dig up downsides [131064200110] |
  • Adding/removing software from the compute-nodes involves reinstalling them (although it does ensure homogeneity).
  • [131064200120] |
  • Adding/removing software involves either adding a roll (their way of bundling rpms/appliances) or editing XML files. [131064200130] |However, it's fairly well documented, so if you're willing to put some effort into reading the documentation you should be OK. [131064200140] |Plus there's a mailing-list if you get stuck.
  • [131064200150] |
  • It's based on CentOS/RHEL, which is a little behind "bleeding edge"
  • [131064200160] |
  • It'll (mostly) force you to do things "their way": minor changes you might get away with by modifying some of the XML config files; major changes might have to be implemented by making, adding, or modifying rolls (their software/add-on format)