[131008750010] |Nginx reverse proxy - how to whitelist / sanitize URL queries? [131008750020] |Using Nginx as a reverse proxy. [131008750030] |I wish to ensure that only valid URLs are sent through to back end servers. [131008750040] |What's the best way to handle this? [131008760010] |Keep newlines in output of cut [131008760020] |Let's imagine we have a file: [131008760030] |I want to cut every second word in every line, so I want file to look like this: [131008760040] |I am trying to use cut -d ' ' -f -1 but what I get is: [131008760050] |How can i preserve line endings in the file? [131008770010] |file contents [131008770020] |getting the first column only [131008770030] |getting the first and third column, this time from stdin through a pipe [131008770040] |EDIT: I think I know what you're doing wrong [131008780010] |Mass .flac --> .mp3 transcoding: How to write a shell script that preserves ID3 tag information? [131008780020] |In recent weeks I've gone from a fairly 'hands-on' approach to .flac --> .mp3 transcoding, to one that's far more 'set &forget'. [131008780030] |The first step was to stop using a GUI front end (Audacity with a LAME plug-in) and instead use the method I outlined here. [131008780040] |The second step was to find a bash shell script that would tell that command loop to work recursively, allowing directories with many subdirectories containing .flac files to be transcoded in one simple step. [131008780050] |That answer was provided by a user at askubuntu.com. [131008780060] |Now I wish to learn how to further refine things so that ID3 tag information is preserved. [131008780070] |The methods linked to above strip ID3 tag data, leaving the bare minimum (i.e. only the title field remains). [131008780080] |Can anyone teach me how to write such a shell script? [131008780090] |Many thanks in advance. [131008780100] |The shell script has been updated thus: [131008780110] |Doing find . -name '*.flac' -exec ~/bin/flac2mp3 '{}' \; in ~/Desktop/stack gives the following output: [131008780120] |id3info for the original .flac and resultant .mp3 gives, respectively: [131008780130] |(i.e. nothing); [131008780140] |The .flac definitely has tag information. [131008780150] |I can verify this by opening up EasyTAG. [131008780160] |EasyTAG refers to this as 'FLAC Vorbis Tag' but 'ID3 Tag' for the .mp3. [131008780170] |Is this the problem? [131008800010] |Use math.h and link objects files without -lm. [131008800020] |Hi, [131008800030] |I am currently reading Advanced Linux Programming and in the section 2.3.3 it is said that if I use some function from math.h I have to link the object files with -lm. [131008800040] |But I am pretty sure that I already used some math function like sqrt, pow or log without having to specify the use of this shared library. [131008800050] |Do you see where the problem is? [131008800060] |Thank you [131008810010] |You should always use -lm when using functions from math.h if you want to keep your code/makefiles portable. [131008810020] |Some of the things in that header are macros (which obviously don't need additional libraries), but which are is not specified (except for a few ones). [131008810030] |Some other functions might be implemented as build-ins by your compiler (even replaced by processor-specific opcodes directly), inlined, etc... so the correct linking of your code could also depend on optimizer settings and the exact compiler/version you're using. 
[131008810040] |For example: [131008810050] |With GCC 4.5.1 on Linux: [131008810060] |So to save yourself a headache, just add -lm. [131008820010] |How do I force remove a package in Arch with pacman? [131008820020] |How do I force remove a package in Arch with pacman while other packages still depend upon it. [131008820030] |basically LWP 6 split a whole bunch of packages, and I need to remove it so I can reinstall it. [131008830010] |You should be able to reinstall the package with a simple: [131008830020] |This will only remove perl-libwww: [131008830030] |Please notice the double -d in the command, if you use --nodeps you have to specify that twice too or combine it with a -d like: [131008830040] |This removes all the packages which depend on perl-libwww: [131008830050] |From pacman's man page: [131008840010] |Xubuntu: Restoring the notifcation area in xfce4 [131008840020] |While playing with xfce4 (on Xubuntu) I accidentally removed the notification area which is placed by default in the right of the top panel. [131008840030] |Is there a way to restore it back to it's original state without diving into the process of adding, placing and adjusting icons? [131008840040] |EDIT [131008840050] |Actually, I'm looking for restoring the notification area to it's default original state by restoring its state "meta data" stored in the my /home directory, how can I accomplish that with the command line? [131008840060] |Where should I go and what should I restore? [131008840070] |Thanks [131008850010] |It's just one widget, so you can just add the Notification Area panel item using the Panel configuration. [131008850020] |You can drag-and-drop it in XFCE4 in versions prior to 4.8. [131008850030] |With 4.8 you there is a + putton in the Items tab. [131008860010] |From the command line, try moving ~/.config/xfce4/panel to another directory (e.g. ~/tmp/): mkdir -p ~/tmp; mv ~/.config/xfce4/panel ~/tmp/. [131008860020] |But a more "safe" way would be to create a new panel (right click on the panel and choose "Customize Panel...". [131008860030] |From there, create a new panel with the appearance and position of the one you have now. [131008860040] |When that has been created, delete the first panel. [131008870010] |Vim: word wrap for documents [131008870020] |Whats the best way to do word wrapping for a document in vim? [131008870030] |I'd like have each line wrap at 80 characters, not just visually wrap but enter a newline. [131008870040] |I got this kind of working by using [131008870050] |But if I go back and insert text on a previous line, it doesn't adjust the rest of the paragraph. [131008870060] |It takes the overflow text and puts it on a newline, but the line that comes after doesn't get moved up to fill in the gap. [131008870070] |So if I edit a previous line, my paragraph can look like this [131008870080] |I want it to be like this: [131008880010] |wrap and linebreak control the display of text, I think you'll find they don't actually insert newlines in the file. [131008880020] |To get vim to insert newlines in the file as you type, set textwidth to the desired width (e.g. 80). [131008880030] |That will still not automatically reflow subsequent lines when you insert more text. [131008880040] |I usually do that manually with gq}, but I just discovered that set formatoptions+=a will tell vim to do it automatically. [131008880050] |See the help for auto-format. [131008890010] |The gq} wraps a paragraph to textwidth. [131008890020] |Be sure to set tw=80 first. 
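For reference, a minimal sketch of the corresponding ~/.vimrc lines (80 columns is simply the width used in the question):

    " hard-wrap by inserting real newlines at column 80
    set textwidth=80
    " the 'a' flag makes Vim reflow the whole paragraph automatically as you edit
    set formatoptions+=a
    " without 'a', reflow the current paragraph by hand with gqap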
[131008890030] |Many distros map that to Q. [131008890040] |So you may also be able to also use Q} instead. [131008900010] |I use par for formatting, it can even word wrap with an existing prefix, in the context of emails for example. [131008910010] |what is it that clobbers my letters together in gedit? [131008910020] |sometimes after I've been editing a textfile in gedit the letters clobber together like this: [131008910030] |what is it that does that and how can i stop it from happening? [131008920010] |Seems to be an effect of this bug: https://bugzilla.gnome.org/show_bug.cgi?id=127731 [131008920020] |The bug is triggered when you have a very long line (something over 500k chars). [131008920030] |You can stop it from happening by inserting some line breaks. [131008920040] |If you really need the long line without line breaks then you will have to use another editor until the bug is fixed. [131008930010] |CentOS command issue [131008930020] |This is what I am trying to run in VMware using CentOS 5 [131008940010] |You're specifying the arguments incorrectly. [131008940020] |If you look at the pwnat home page (or indeed actually read the help text that's displayed) you'll see that there are no -p, --proxyhost, etc. args. [131008940030] |(These are simply supplied after the single -c option's IP address.) [131008940040] |There are some examples on the pwnat home page I've linked to above that should help if you're still stuck. [131008950010] |Where did you get --proxyhost from?? [131008950020] |The documentation gives an example: [131008950030] |So try this: [131008960010] |What could cause which to not show something in the path? [131008960020] |There is an executable in my path that I believe is a perl script. [131008960030] |but [131008960040] |and the path it prints is the same that echo $PATH gives (except space delimited rather than colon). [131008960050] |Running ksh again does not appear to change my path, but now the script is not found. [131008960060] |This is as a normal user running ksh on AIX 6.1 over telnet (yeah, I know). [131008970010] |which is a csh script on AIX, and it might use a different path. [131008970020] |But that doesn't seem to be exactly your problem, since which prints the expected path. [131008970030] |Are you absolutely sure you didn't have a different path in the first ksh session? [131008970040] |Running ksh again might have changed the PATH because it ran ~/.kshrc or $ENV. [131008970050] |(And just to be sure, does the script still exist, or could it simply have been deleted in the meantime?) [131008980010] |Running type $scriptname told me that it was an alias. [131008980020] |Apparently this prevented it from being found by which. [131008990010] |What is .emacs? How do I edit it? [131008990020] |In this article, I don't understand step 7: [131008990030] |7. Add (load-library “init_python”) in your .emacs [131008990040] |How can I do this? [131009000010] |To quote the Emacs manual on The Init File: [131009000020] |When Emacs is started, it normally tries to load a Lisp program from an initialization file, or init file for short. [131009000030] |This file, if it exists, specifies how to initialize Emacs for you. [131009000040] |Emacs looks for your init file using the filenames ~/.emacs, ~/.emacs.el, or ~/.emacs.d/init.el; you can choose to use any one of these three names. [131009000050] |Here, ~/ stands for your home directory. [131009000060] |Originally, Emacs looked only for ~/.emacs. 
[131009000070] |The variant names were introduced in more recent versions of Emacs. [131009000080] |So, in this case: [131009000090] |Type C-x C-f ~/.emacs to load or create your .emacs file. [131009000100] |Type (load-library “init_python”) into that file. [131009000110] |Type C-x C-s to save it. [131009010010] |How can I apply caret-substitution to my nth-last command? [131009010020] |I regularly find myself having to execute a lengthy command on a file, then process the results with other commands. [131009010030] |To process the next file, I usually rerun the same command by hitting the Up key until I find the command I want and arduously replace the old filename with the new filename. [131009010040] |Is there a way to combine caret substitution (^oldfile^newfile) with the n th-last command? [131009010050] |I have (unsuccessfully) tried to pipe the n th-last command into the substitution like so: [131009010060] |Of course, I am open to other suggestions. [131009010070] |These little shortcuts really help with productivity... [131009020010] |You could write lengthy command as a shell function that takes a filename as a parameter and then just type function filename when you need it. [131009030010] |You can't do it with a quick substitution directly, because ^foo^bar is shorthand for: [131009030020] |The !! part (which refers to the last command) isn't part of the quick syntax (that's what makes it quick), but you can use the longer syntax directly and then modify the !! to whatever you want: [131009030030] |I explained as much of the history syntax as I know in this post; the last section includes the :s modifier [131009040010] |Manipulating a file with sed [131009040020] |I have a file called students.txt and it contains the following data in the format Surname, Forename: day.month.year: Degree: [131009040030] |I'm trying to return all lines in the format Forename,Surname: day.month.year, but without the MSc degree being studied. [131009040040] |So far I have: [131009040050] |What's wrong with that? [131009050010] |Rather than sed, it might be easier use awk with a field separator of ':', and just print the first two fields. [131009060010] |This should do it: [131009060020] |The first statement (separated by ;) searches for the Surname, delimited by comma-space, and Forename, delimited by colon, and swaps them, using a comma-space separator. [131009060030] |The second statement searches for the last colon and removes that and anything to the end of the line. [131009060040] |As someone mentioned this could be handled by awk. [131009060050] |Q.E.D [131009070010] |For sed you'll want three back references. [131009070020] |The first delimited by the comma and the second two delimited by the colon [131009070030] |However, when dealing with delimiter and fields, awk is really the tool to use because you can specify a field separator which can be a regex. [131009070040] |In this case our field separator is either a comma or colon folowed by a space. [131009080010] |Sniffing packets through router [131009080020] |I would like to create a system like this. [131009080030] |The user would connect through a wifi network which would reroute all http requests and responses through the network card on a computer thus allowing that computer to sniff the packets. [131009080040] |I have debian running on the computer. [131009080050] |How would i go about doing this? 
[131009090010] |Unless your brand of router specifically allows for that kind of interception (most don't unless you're talking about industrial grade stuff with triple digit costs and usage licenses) I'm afraid you're sunk; a better bet might be to install a wifi card in your computer and try to sniff the wireless traffic directly using something like wireshark or Kismet. [131009100010] |Where ethX the card which receives this traffic and x.x.x.x/y is the cidr of the wireless network. [131009100020] |This should capture anything coming or going to this network and save it to "capturefile" file. [131009100030] |Add and port 80 in the end if you want only web traffic. [131009100040] |This looks like a "honeypot" setup. [131009100050] |If you are trying to capture http sessions and/or other private information, this is illegal. [131009100060] |Even if those users are trying to steal your internet connection. [131009100070] |Unless you've acquired permission from those who use the wireless network, this is probably illegal. [131009100080] |EDIT: [131009100090] |I thought you already had setup the rerouting part. [131009100100] |If you're asking how to reroute the specific traffic, it is possible but it depends on the hardware you have and probably I can't help you on that. [131009110010] |You can connect modem directly to your computer and make router so send all traffic through your debian system. [131009110020] |In this case you may do anything you want with packets. [131009110030] |P.S.: do you need something like this? [131009120010] |Get a 10/100 ethernet hub, a real hub, like a Netgear DS104. [131009120020] |Put it between the wifi and the router. [131009120030] |Hubs replicate traffic on all ports, so you can connect a separate machine to another port on the hub and sniff everything. [131009130010] |How to upgrade Fedora Core 3? [131009130020] |I want to upgrade from fedora core 3 to latest release without using a cd. [131009130030] |I have internet connection. [131009130040] |Is it possible to use yum to upgrade the OS entirely to the latest version? [131009130050] |At present there are no .repo files in my /etc/yum.repos.d/ directory. [131009130060] |So I could not use the yum command. [131009130070] |How to get the necessary repository and to upgrade the OS? [131009140010] |I don't think using yum is feasible for such an early release of Fedora. [131009140020] |I seem to remember having trauma upgrading an FC4 system. [131009140030] |My best advice is to: [131009140040] |
  • Download and burn a DVD of the latest version of Fedora.
  • [131009140050] |Back up any important user files, as Faheem suggests.
  • [131009140060] |Start the installation process (reboot from the DVD).
  • [131009140070] |At the boot prompt, use the 'upgrade' option.
  • [131009140080] |This will attempt to upgrade your system without affecting your user files. [131009140090] |If this fails, you'll need to do a fresh installation and re-install your backed up files. [131009140100] |For later versions of Fedora, using yum is much better supported: [131009140110] |This will download the correct versions of all the RPMs required and set everything up so that the system can upgrade itself when it reboots. [131009150010] |Flash plugin installation error [131009150020] |Somehow I found a repository for fedora core 3 flash plug-in. [131009150030] |When I tried to install the flash plug-in I have encountered this error. [131009150040] |I can't figure it out what does it mean, or how to solve this error. [131009150050] |Any little help is much appreciated. :) [131009160010] |This looking like a harmless warning. rpm did not find the gpg signing key, or something like that. [131009160020] |Is this the end of the output? [131009160030] |If so, you may need to configure yum to ignore the signing key issue. [131009170010] |per process swapiness for linux [131009170020] |/proc/sys/vm/swappiness is nice but i want a knob that's per process like /proc/$PID/oom_adj so that i can make certain processes less likely than others to have any of their pages swapped out unlike memlock this doesn't prevent a program from being swapped and like nice the user by default can't make their programs less likely but only more likely to get swapped i think i'd a call this /proc/$PID/swappiness_adj [131009180010] |You can configure swappiness per cgroup: [131009180020] |http://www.kernel.org/doc/Documentation/cgroups/cgroups.txt [131009180030] |http://www.kernel.org/doc/Documentation/cgroups/memory.txt [131009190010] |Problem pinging from a specific interface [131009190020] |I'm trying to ping from a specific interface, I have a wired and a wireless connection both going into my laptop. [131009190030] |My wired adaptor eth0 is on the IP 172.16.109.75 My wifi adaptor wlan0 is on the IP 192.168.1.69 [131009190040] |When I ping google with my eth0 unplugged with the following command: [131009190050] |It works fine as expected. [131009190060] |I plug my eth0 cable in and run the same again: [131009190070] |By the output at the top it seems to send it from the eth0 (which at work won't be able to ping because it gets blocked), but the wifi is another link to a separate network where I'm on the net directly and therefore sending the ping request from the wlan0 should work. [131009190080] |What's happening? [131009190090] |How should I fix it? [131009200010] |Probably, when pluging in the ethernet cable, your default route gateway changes by dhcp. [131009200020] |You send packets from wlan0 but your system doesn't know who is the gateway to forward them to. [131009200030] |This way you can only ping systems within 192.168.1 network but not further. [131009200040] |If you want to get a reply from the google server, you'll have to either change the default gateway back to the wireless router, or add a specific route for this server. [131009210010] |To ping from some interface you should use: [131009220010] |Split: how to split into different percentages ? [131009220020] |How can I split a text file into 70% and 30% using the split command ? [131009230010] |The commands below will work for percentages above 50% (if you want to split only into two files), quick and dirty approach. 
[131009230020] |1) split 70% based on lines [131009230030] |2) split 70% based on bytes [131009240010] |Display stuff below the prompt at a shell prompt? [131009240020] |Lets's say my prompt looks like this (the _ represents my cursor) [131009240030] |Is there any way I could make it look like this [131009240040] |The question was originally about zsh, but now has other answers. [131009250010] |The following settings seem to work. [131009250020] |The text on the second line disappears if the command line overflows the first line. [131009250030] |The preexec function erases the second line before running the command; if you want to keep it, change to preexec () { echo; }. [131009250040] |% escapes are documented in the zsh manual (man zshmisc). [131009250050] |Terminfo is a terminal access API. [131009250060] |Zsh has a terminfo module that gives access to the terminal description database: $terminfo[$cap] is the sequence of characters to send to exercise the terminal's capability $cap, i.e., to run its $cap command. [131009250070] |See man 5 terminfo (on Linux, the section number may vary on other unices) for more information. [131009250080] |The sequence of actions is: move the cursor down one line (cud1), then back up (cuu1); save the cursor position (sc); move the cursor down one line; print [some status]; restore the cursor position. [131009250090] |The down-and-up bit at the beginning is only necessary in case the prompt is on the bottom line of the screen. [131009250100] |The preexec line erases the second line (el) so that it doesn't get mixed up with output from the command. [131009250110] |If the text on the second line is wider than the terminal, the display may be garbled. [131009250120] |Use Ctrl+L in a pinch to repair. [131009260010] |Here is a bash equivalent of Gilles' zsh solution. [131009260020] |Bash doesn't have a native terminfo module, but the tput command (bundled with terminfo) does much the same thing. [131009260030] |If the terminal doesn't support one of the capabilities, it will fall back to a one-line prompt. [131009260040] |The trap line is a hacky way to emulate zsh's preexec function. [131009260050] |See http://superuser.com/questions/175799/ for more info. [131009260060] |EDIT: Improved script based on Gilles' comments. [131009270010] |listing packages in Debian, a la `dpkg -l`, but including the package origin/source [131009270020] |I want to list all packages of the form [131009270030] |but in addition to this output, I would like the origin/source (I'm not sure of the preferred term) of each package. [131009270040] |If the package doesn't correspond to any source, it should say unknown or similar. [131009270050] |Off the top of my head, the most promising approach would be to use dctrl-tools, but I'm not sure how to go about it. [131009270060] |For background, I was trying to debug a library mismatch with ffmpeg. [131009270070] |See Debian bug report - ffmpeg: backport of 4:0.6.1-5 from unstable produces WARNING: library configuration mismatch. [131009270080] |The bug report is no longer an issue, but I'm still interested in this question. [131009270090] |Just to be clear on the format, this should look something like [131009270100] |If the same package is available in multiple categories, ie. in both squeeze and testing, then the lowest / oldest category available should be used. [131009270110] |In this case, squeeze. [131009280010] |Dpkg doesn't track this information. [131009280020] |Where you got each .deb file is not its concern. 
[131009280030] |Apt doesn't track this information either, but it knows where you can now get the package, which is good enough. [131009280040] |As 9000 wrote in a comment, apt-cache policy '^libav' shows you what versions of packages with names matching the regexp ^libav are installed or available. [131009280050] |The output isn't particularly convenient to parse, but here's a minimally tested script that gives approximately the format you want: [131009280060] |Another way to the information you're asking for is with aptitude versions. [131009280070] |Again, the minimally-tested snippet below gives roughly the desired format. [131009280080] |The pattern "^libav" ~i matches packages that are installed and whose name matches the given regexp. [131009280090] |There's also a separately-packages utility apt-show-versions that, again, gives the information you want in roughly the format you're asking. [131009290010] |Multiline pattern match using sed, awk or grep [131009290020] |Is it possible to do a multiline pattern match using sed, awk or grep. [131009290030] |Take for example, I would like to get all the lines between { and } [131009290040] |So it should be able to match [131009290050] |
  • {}
  • [131009290060] |{.....}
  • [131009290070] |{..... .....}
  • [131009290080] |Initially the question used

    as an example. [131009290090] |Edited the question to use {,} [131009300010] |XML like expressions (infinintely recursive tags) is not a 'regular language' therefore cannot be parsed with regular expressions (regex). [131009300020] |Here's why: [131009300030] |http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/ [131009300040] |http://www.perlmonks.org/?node_id=668353 [131009300050] |http://stackoverflow.com/questions/1379524/textual-protocol-which-is-not-a-regular-language [131009310010] |Regular expressions cannot find matching nested parentheses. [131009310020] |If you are certain that there will be no pair of parentheses nested inside the one you are searching, you can search until the first closing one. [131009310030] |For example: [131009310040] |This will replace all the text from '{' to '}' with what's between them. [131009320010] |You can use the -M (multiline) option for pcregrep: [131009320020] |\s is whitespace (including newlines), so this matches zero or more occurrences of (whitespace followed by .* followed by whitespace), all enclosed in braces. [131009320030] |Update: [131009320040] |This should do the non-greedy matching: [131009330010] |While I agree with the advice above, that you'll want to get a parser for anything more than tiny or completely ad-hoc, it is (barely ;-) possible to match multi-line blocks between curly braces with sed. [131009330020] |Here's a debugging version of the sed code [131009330030] |Some notes, [131009330040] |

  • -n means 'no default print lines as processed'.
  • [131009330050] |'p' means now print the line.
  • [131009330060] |The construct /[{]/,/[}]/ is a range expression. [131009330070] |It means scan until you find something that matches the first pattern (/[{]/), then scan until you find the second pattern (/[}]/), and then perform whatever actions you find in between the { } in the sed code. [131009330080] |In this case, 'p' and the debugging code (not explained here; use it, modify it, or take it out as works best for you).
  • [131009330090] |You can remove the /[}]/a\ end-of-block debugging when you have proven to your satisfaction that the code is really matching blocks delimited by {,}. [131009330100] |This code sample will skip over anything not inside a curly-brace pair. [131009330110] |It will, as noted by others above, be easily confused if you have any extra {,} embedded in strings, regexps, etc., or where the closing brace is on the same line (with thanks to fred.bear). [131009330120] |I hope this helps. [131009330130] |P.S. As you appear to be a new user, if you get an answer that helps you, please remember to mark it as accepted, or give it a + (or -) as a useful answer. [131009340010] |How to interpret traceroute information? [131009340020] |For example in this pic taken from Wikipedia [131009340030] |[Bigger Version]
  • [131009340040] |What are all these 12 server names that are listed? [131009340050] |I know they are the servers along the path that the network packet had to travel to reach the destination. [131009340060] |But why do I need to go to any server except the server that actually serves the request by providing me the file I requested?
  • [131009340070] |What are the three timing values that are shown for each server name?
  • [131009340080] |Isn't the process supposed to be like this (copied from here)? [131009340090] |The browser communicated with a name server to translate the server name www.howstuffworks.com" into an IP Address, which it uses to connect to the server machine. [131009340100] |The browser then formed a connection to the server at that IP address on port 80. [131009340110] |(We'll discuss ports later in this article.) [131009340120] |Following the HTTP protocol, the browser sent a GET request to the server, asking for the file "http://www.howstuffworks.com/web-server.htm." [131009340130] |(Note that cookies may be sent from browser to server with the GET request -- see How Internet Cookies Work for details.) [131009340140] |The server then sent the HTML text for the Web page to the browser. [131009340150] |(Cookies may also be sent from server to browser in the header for the page.) [131009340160] |The browser read the HTML tags and formatted the page onto your screen. [131009350010] |The 12 lines are the nodes the packet had to go through to get to wikimedia's server. [131009350020] |You need to go through 12 nodes because your computer isn't connected directly into wikimedia's server. [131009350030] |It is connected to a node at xs4all.net (an ISP, naturally), which is connected to another xs4net node, which is connected to wvfiber.net, which is connected to as30217.net, which is connected to wikimedia.org, so that's the best path the routers could find to get the packet to the destination. [131009350040] |If your computer was plugged directly into the target computer there would be only one hop; try tracerouting to another computer on your local network: [131009350050] |The timing information on the right side of each hop is the round-trip time for the packet. [131009350060] |By default three packets are sent per hop, so it shows three timings; if you give traceroute the -q option you can control how many packets are sent: [131009360010] |The hosts shown by traceroute are not servers, they're routers. [131009360020] |Traceroute shows the hops on the IP route from the computer of the person who took this snapshot to the wikipedia server. [131009360030] |The description of HTTP that you quote looks at a much higher level where all this routing is transparent. [131009360040] |I think the best way of explaining this is through a metaphor. [131009360050] |HTTP (for example) requires a bidirectional communication channel between the client and the server; this channel is provided by TCP. [131009360060] |TCP is built in turn on top of IP. [131009360070] |The goal of IP is to transmit packets from one IP address to another. [131009360080] |An TCP connection requires IP packets going from the client to the server and IP packets going from the server to the client. [131009360090] |Ok, now think of each IP packet as a letter that you drop in a mail box and that the Post Office carries to its destination. [131009360100] |Traceroute shows all the stages on the journey of the letter from your dwelling to the recipient's dwelling: the mail box it's dropped in, the town post office, the district sorting office, the regional mail hub, etc., until the letter reaches the recipient's mail slot. [131009360110] |This is basically what you see when you watch the progress of a registered tracked parcel with DHL/UPS/... [131009360120] |In this example, the first two hops are called ….xs4all.net; they're clearly from the snapshot author's ISP. 
[131009360130] |The next few lines are from WV Fiber, which operates international transit lines. [131009360140] |I don't know who as30217.net is; probably an ISP for datacenters. [131009360150] |The final two machines are from Wikipedia. [131009360160] |IP routing is completely transparent to higher-level protocols such as TCP and a fortiori all protocols built over TCP. [131009360170] |In fact, traceroute has to play some tricks to obtain the information at all. [131009370010] |when does the system send a SIGTERM to a process? [131009370020] |My server program received a SIGTERM and stopped (with exit code 0). [131009370030] |I am surprised by this, as I am pretty sure that there was plenty of memory for it. [131009370040] |Under what conditions does linux (busybox) send a SIGTERM to a process? [131009380010] |kernel can generate SIGTERM when running low on disk space or there is a hardware interrupt...caused by some error [131009390010] |Are you sure it exited on SIGTERM? [131009390020] |The kernel nor busybox would never generate this normally. [131009390030] |If the program actually exited on a signal, it would not have an exit code unless you caught the signal and did a normal exit. [131009390040] |You mentioned working with serial ports and sockets, it is possibly that it's a SIGPIPE that's killing it? [131009390050] |Or possibly a SIGINT due to receiving a Control-C over the serial port? [131009400010] |I'll post this as an answer so that there's some kind of resolution if this turns out to be the issue. [131009400020] |An exit status of 0 means a normal exit from a successful program. [131009400030] |An exiting program can choose any integer between 0 and 255 as its exit status. [131009400040] |Conventionally, programs use small values. [131009400050] |Values 126 and above are used by the shell to report special conditions, so it's best to avoid them. [131009400060] |At the C API level, programs report a 16-bit status¹ that encodes both the program's exit status and the signal that killed it, if any. [131009400070] |In the shell, a command's exit status (saved in $?) conflates the actual exit status of the program and the signal value: if a program is killed by a signal, $? is set to a value greater than 128 (on every unix I know, this value is 128 plus the signal number). [131009400080] |In particular, if $? is 0, your program exited normally. [131009400090] |¹ roughly speaking [131009410010] |Configuring cron.hourly [131009410020] |Hi, [131009410030] |I am unable to configure a cron job to run by placing it in /etc/cron.hourly folder. [131009410040] |The file under cron.hourly is : [131009410050] |Permissions on the file : [131009410060] |There seems to be no errors reported in the var/log/cron logfile. [131009410070] |No mention of the script is done. :( [131009420010] |In order to isolate the problem, move /usr/local/xxxx/check-interface.bash to /etc/cron.hourly/check , and then see if it runs. [131009420020] |If the script does run, then the problem is caused by an ownership/permissions or related issue which is preventing cron from executing scripts at /usr/local/xxxx/*. [131009420030] |If the script does not run, then the problem is most likely with your script itself. [131009420040] |As another sanity check, replace the contents of /usr/local/xxxx/check-interface.bash with something dead simple, like: [131009420050] |And then see if /tmp/check-interfaces.log is actually being populated by your cronjob. 
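The "dead simple" replacement script isn't preserved above; a stand-in as small as this would do (using the same log path mentioned in the answer):

    #!/bin/bash
    # append a timestamp so every hourly run leaves a visible trace
    date >> /tmp/check-interfaces.log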
[131009420060] |If it does work, then the problem must be with your original script. [131009430010] |ssh remote server on some port other than 22 without password [131009430020] |I am usually connecting to the remote server with [131009430030] |ssh user@server.com -p 11000 [131009430040] |and then giving the password each time for user. [131009430050] |How should I avoid entering the password each time I connect using ssh ? [131009440010] |Turn to key-based authentication. [131009450010] |I think I should add a bit of a narrative to the link in 9000's answer. [131009450020] |First, check in home to see if you have a folder named .ssh with files inside. [131009450030] |If .ssh doesn't exist, you don't have any key set up, so you have to generate a pair using ssh-keygen (it defaults to create the keys in ~/.ssh, you can give the keys a password or not). [131009450040] |This will give you a .pub file in ~/.ssh (mine is id_rsa.pub). [131009450050] |The file id_rsa.pub should look like: [131009450060] |ssh-rsa lotsofrandomtext user@local [131009450070] |Second, ssh to the server, check to see if you have a file named ~/.ssh/authorized_keys, create it if it doesn't exist. [131009450080] |Then, append the contents of the ~/.ssh/id_rsa.pub that you generated earlier in your local machine here. [131009450090] |This might mean copying the file contents to your clipboard, then opening ~/.ssh/authorized_keys in a text editor and pasting the lines in. [131009450100] |You should now be able to ssh without typing the password. [131009450110] |Third, if you put this in ~/.ssh/config: [131009450120] |You will be able to ssh server (save quite a few key strokes). [131009450130] |Thanks to cjm for the User keyword that I forgot to mention. [131009460010] |As a supplement to phunehehe's answer, see the Gentoo Linux Keychain Guide for a guide to keychain. keychain also uses ssh-agent. [131009460020] |The ssh-agent daemon makes the passphrase available (it becomes unavailable when the ssh-agent daemon dies) but keychain reuses an ssh-agent between logins, and optionally prompts for passphrases each time the user logs in, to quote the guide. [131009470010] |"Conflicting versions" error while running eggdrop [131009470020] |I'm getting this error while running the eggdrop bot, while loading the http.tcl script: [131009470030] |conflicting versions provided for package "http": 2.7.5, then 2.5.2 [131009470040] |How do I fix this? [131009480010] |How to utilize TUN/TAP tunnel from user program? [131009480020] |I recently discovered the existence of Linux TUN/TAP interfaces and am still trying to understand them. [131009480030] |I think I get the basic concept - pseudo devices are created which emulate a network interface and instead of passing data to hardware it is passed to a userspace program. [131009480040] |How would you direct an unrelated program to utilize this tunnel? [131009480050] |For example, before the tunnel is created my system only contains eth0 and lo, the normal ethernet interface (wired to my local network) and the loopback interface. [131009480060] |After a program creates and configures a tunnel, I have a new interface gr0 which I gave an IP address that is on my local network, but not in use (so we are all on the same subnet). [131009480070] |How would I make an unrelated program utilize this 'tunnel'? [131009480080] |Say I had a simple Python message passing client/server app which utilizes a TCP connection, how could I configure it to use the tunnel? 
[131009480090] |I apologize if I am missing something basic, but as usual I have managed to confuse myself in the scheme of things. [131009480100] |Again, all I want is to have a simple TCP program utilize this tunnel. [131009480110] |Thanks! [131009490010] |It's not always "tunnel". [131009490020] |TUN/TAP is just specific NIC drivers. [131009490030] |From point of view of network stack they acts as any other network interfaces: they can have IP addresses, can be point-to-point or broadcast interfaces. [131009490040] |Routing rules also applies to them. [131009490050] |But all traffic that gets written to one of that network interfaces goes to some userspace program for processing, and all data written by userspace program directly to /dev/tunX looks like incoming packets for network stack. [131009490060] |In usual tunneling setup server and client have TUN devices with assigned addresses. [131009490070] |Routing tables configured on both of them directs needed traffic to this TUN devices. [131009490080] |When packet get routed to tun0, kernel sends it to userspace program (client) that sends this packet to other program on remote machine (server) via, for example, TCP connection. On remote machine other program (server) recieves packet from client and writes it to it's own /dev/tunX device, "injecting" that packet into network stack. [131009490090] |And tunneled packet gets processed as any other. [131009500010] |How can I make iconv replace the input file with the converted output? [131009500020] |I have a bash script which enumerates through every *.php file in a directory and applies iconv to it. [131009500030] |This gets output in STDOUT. [131009500040] |Since adding the -o parameter ( in my experience ) actually writes a blank file probably before the conversion takes place, how can I adjust my script so it does the conversion, then overwrites the input file? [131009510010] |This isn't working because iconv first creates the output file (since the file already exists, it truncates it), then starts reading its input file (which is now empty). [131009510020] |Most programs behave this way. [131009510030] |Create a new, temporary file for the output, then move it into place. [131009510040] |Colin Watson's sponge utility (included in Joey Hess's moreutils) automates this: [131009520010] |How to make USB debian squeeze disk for PPC? [131009520020] |I'm using a G4 1.5ghz PPC 32bit Powerbook [131009520030] |I followed the instructions on this site. [131009520040] |http://mintppc.org/content/installation-mintppc-92 [131009520050] |But could never get my powerbook to boot up from the usb drive. [131009520060] |I can not use a CD because my DVD drive is busted. [131009520070] |Thus the need for USB. [131009520080] |Also most of the usb partitioning and install instructions are for people using linux already. [131009520090] |I need to create the disk in mac osx only. [131009530010] |I managed to do it a while back with Debian, here you can download it: [131009530020] |ISOs [131009530030] |and here is the installation manual. [131009530040] |Manual [131009530050] |Notice that some Macs wont boot from an USB device. check your models manual. [131009530060] |Good luck! [131009540010] |How do I split a flac with a cue? [131009540020] |I've got a full album flac, and a cue file for it. [131009540030] |How can I split this into a flac per track. 
[131009540040] |I'm a KDE user, so I would prefer a KDE/QT way, I would like to see command line and other gui answer's as well, but they are not my preferred method. [131009550010] |I only know a CLI way. [131009550020] |You will need cuetools and shntool. [131009560010] |*nix whose package manager DOES NOT split Python into multiple packages [131009560020] |Is there a *nix whose package manager doesn't split Python into multiple packages (typically something like python and python-devel). [131009560030] |I'd really like to just get the entire standard library when I install it, since that's how it's designed to work. [131009560040] |Thanks. [131009560050] |UPDATE: Some people are wondering what I mean, so here's an example: https://bugs.launchpad.net/ubuntu/+source/python-defaults/+bug/123755. [131009560060] |I realize it's not caused (in this instance) by a python/python-devel split, but it's the sort of issue I don't want to worry about. [131009560070] |I just want to install the entirety of Python—with no weird tweaks of the ImportError handler, or std lib modules ripped out (for any reason)—and then let my package manager handle security updates, etc. [131009570010] |Well, there's Gentoo. [131009570020] |Since it installs everything from source, there are no -dev packages. [131009580010] |On Debian (and therefore probably Ubuntu), running apt-get install python installs python-minimal and python, which results in all core modules being installed, which I assume is what you mean by "the entire standard library". [131009580020] |The only caveat I can find is that the Tk GUI stuff is all installed, but you need to install the python-tk package to use it properly. [131009580030] |So run apt-get install python python-tk, and you have everything you need. [131009580040] |Does that meet your criteria? [131009590010] |Arch Linux doesn't have separate packages for -dev and it's binary (unlike gentoo). [131009590020] |There might be a few things, like tk which isn't pulled in by default. [131009590030] |Here's the python package for arch. [131009600010] |How to change the order of the network cards (eth1 <-> eth0) on linux. [131009600020] |Is there any way to swap network interfaces (eth1 <-> eth0) after system installation. [131009600030] |My brand new Debian 6.0 install assigned PCI network card as "eth0" and motherboards integrated network device as "eth1" by default. [131009600040] |The problem is I want to use the integrated device as default (eth0) network interface. [131009600050] |I already edited : [131009600060] |/etc/udev/rules.d/70-persistent-net.rules [131009600070] |to swap the names and everything seems to be ok and network is working but programs are still trying to use the PCI network card (which is now "eth1") as the default interface. [131009600080] |For example iftop now tries to use "eth1" as default device as it used "eth0" before the swap. [131009600090] |Is this purely a software problem as the applications are trying to use the first found device as a default device despite their interface naming or is there any way to fix this by configuring OS? [131009600100] |edit: I wrote a small app to print out iflist and the PCI device (eth1) came up before "eth0". [131009600110] |Any ideas how to swap the device order. [131009600120] |edit: I found a thread about the same problem and I tried everything they suggested and none of the solutions are working except for swapping the names "virtually". 
[131009610010] |You can use the netdev= kernel command line parameter (you need to pass that to the kernel in grub) to instruct the kernel to link a given irq to a given interface, e.g.: netdev=irq=2,name=eth0 [131009620010] |You are likely going to have to go into each affected programs configuration files and change 'eth1' to 'eth0.' [131009620020] |Such programs defaults are setup when they are installed or first run with the currently detected NICs. [131009620030] |I use Linux as a router, and had this issue when using scripts. [131009620040] |I now have a nice script fragment called netconf that I source in for any other script whenever i need to use NIC names, this file gives me a central location to specify them (i.e. LAN_IFACE=eth0, WAN_IFACE=eth1, etc.) [131009630010] |You can't change which interface is used by default in applications like iftop. [131009630020] |They call the C library function if_nameindex and use the first element in the returned array by default. [131009630030] |GNU libc's if_nameindex on Linux is a thin wrapper around the SIOCGIFCONF ioctl. [131009630040] |That returns interfaces in a fixed order, based on the order in which the network drivers were initialized and the order in which each driver detected each device. [131009630050] |If you really don't want to have to pass -i to iftop and similar programs, you can make a small wrapper around if_nameindex that reorders the elements in the returned list, with LD_PRELOAD. [131009630060] |I would call that a lot more trouble than it's worth. [131009640010] |I am answering to my own question now because I finally found a workaround for this problem. [131009640020] |I found out that it is possible to reorder the devices by unloading the drivers and then loading them in correct order. [131009640030] |

    First method (brute force):

[131009640040] |So the first method I came up with was simply to brute-force the driver reload with an init.d script. [131009640050] |The following init script is tailored for Debian 6.0, but the same principle should work on almost any distribution that uses proper init.d scripts. [131009640060] |The script must then be added to the proper runlevel directory. [131009640070] |This can be done easily on Debian with the "update-rc.d" command. [131009640080] |For example: update-rc.d reorder-nics start S
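The script itself isn't reproduced above; a rough sketch of what such a reload script could look like follows, with e1000e and r8169 as placeholder module names; substitute whatever drivers your NICs actually use, in the order the interfaces should be named:

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          reorder-nics
    # Default-Start:     S
    # Default-Stop:
    # Short-Description: Reload NIC drivers in a fixed order
    ### END INIT INFO
    # Unload both drivers, then load them again in the order that
    # should become eth0, eth1, ...
    modprobe -r e1000e r8169
    modprobe e1000e   # first loaded  -> eth0
    modprobe r8169    # second loaded -> eth1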

    Second method (better, I think):

[131009640100] |I also found a bit more elegant way (at least for Debian and Ubuntu systems). [131009640110] |First, make sure that the kernel doesn't automatically load the NIC drivers. [131009640120] |This can be done by creating a blacklist file in /etc/modprobe.d/. [131009640130] |I created a file named "disable-nics.conf". [131009640140] |Note that files in /etc/modprobe.d/ must have a .conf suffix. [131009640150] |Also, naming modules in /etc/modprobe.d/blacklist.conf does not affect autoloading of modules by the kernel, so you have to make your own file. [131009640160] |Then run 'depmod -ae' as root. [131009640170] |Recreate your initrd with 'update-initramfs -u'. [131009640180] |Finally, add the driver names in the corrected order to the /etc/modules file. [131009640190] |Changes should come into effect after the next boot. [131009640200] |A reboot is not necessary though; the devices can be switched on the fly as root (see the sketch after the link list below). [131009640210] |Some useful links I found while searching for the solution:
  • http://www.macfreek.nl/mindmaster/Logical_Interface_Names
  • [131009640230] |http://wiki.debian.org/KernelModuleBlacklisting
  • [131009640240] |http://www.science.uva.nl/research/air/wiki/LogicalInterfaceNames
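For illustration, using the same placeholder module names as in the sketch above, the files and the runtime swap command from this second method might look like this (not the author's exact files):

    # /etc/modprobe.d/disable-nics.conf -- stop the kernel from
    # auto-loading the NIC drivers at boot
    blacklist e1000e
    blacklist r8169

    # /etc/modules -- load them explicitly, in the order that should
    # define eth0 and eth1
    e1000e
    r8169

    # runtime swap without a reboot (as root)
    modprobe -r e1000e r8169 && modprobe e1000e && modprobe r8169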
  • [131009650010] |Solaris 10 boot from cd-rom [131009650020] |Possible Duplicate: Solaris 10 boot from cd-rom [131009650030] |Hi I am trying to start up my solaris server using the newly downloaded iso written on a CD. [131009650040] |the iso name is : sol-10-u9-ga-sparc-dvd.iso [131009650050] |When we started the server; [131009650060] |logging in as root then tried to boot the system from CD-Rom [131009650070] |but the output is : [131009650080] |ok here is the scenario outputs : [131009650090] |after this next enter brings us a command prompt for sc> [131009650100] |but no boot command recognized here.. [131009650110] |do you have any idea about this issue ? [131009660010] |subversion 1.6 file svn:externals help [131009660020] |Hi, [131009660030] |Can someone please point out how I could do svn:externals for a single file? [131009660040] |I know how to use svn:externals for directories, having used it for a while now under svn 1.4 but I can't seem to figure out how to do file externals in 1.6. [131009660050] |I can't also figure out the documentation in the svn redbook here http://svnbook.red-bean.com/nightly/en/svn.advanced.externals.html. [131009660060] |Thank you so much in advance. [131009660070] |Ismael :) [131009670010] |You include the relative path to the imported copy of the file followed by the absolute path to the external copy of the file on the same line in the svn:externals property of a parent folder to the imported copy. [131009670020] |The external file must be in the same repository an it's imported copy. [131009670030] |Use svn propedit svn:externals . and add a line like: [131009670040] |path/to/imported/file.txt http://www.example.org/svn-repo/external/file.txt [131009670050] |Just make sure it's in the same repository. [131009670060] |Also read: [131009670070] |http://subversion.apache.org/docs/release-notes/1.6.html#file-externals [131009680010] |How to find unmatched brackets in a text file? [131009680020] |Today I learned that I can use perl -c filename to find unmatched curly brackets {} in arbitrary files, not necessarily Perl scripts. [131009680030] |The problem is, it doesn't work with other types of brackets () [] and maybe <>. [131009680040] |I also had experiments with several Vim plugins that claims to help finding unmatched brackets but so far not so good. [131009680050] |I have a text file with quite a few brackets and one of them is missing! [131009680060] |Is there any program / script / vim plugin / whatever that can help me identify the unmatched bracket? [131009690010] |In Vim you can use [ and ] to quickly travel to nearest unmatched bracket of the type entered in the next keystroke. [131009690020] |So [{ will take you back up to the nearest unmatched "{"; ]) would take you ahead to the nearest unmatched ")", and so on. [131009700010] |The best option is vim/gvim as identified by Shadur, but if you want a script, you can check my answer to a similar question on Stack Overflow. [131009700020] |I repeat my whole answer here: [131009700030] |If what you are trying to do applies to a general purpose language, then this is a non-trivial problem. [131009700040] |To start with you will have to worry about comments and strings. [131009700050] |If you want to check this on a programming language that uses regular expressions, this will make your quest harder again. [131009700060] |So before I can come in and give you any advice on your question I need to know the limits of your problem area. 
[131009700070] |If you can guarantee that there are no strings, no comments and no regular expressions to worry about - or more generically nowhere in the code that brackets can possibly be used other than for the uses for which you are checking that they are balanced - this will make life a lot simpler. [131009700080] |Knowing the language that you want to check would be helpful. [131009700090] |If I take the hypothesis that there is no noise, i.e. that all brackets are useful brackets, my strategy would be iterative: [131009700100] |I would simply look for and remove all inner bracket pairs: those that contain no brackets inside. [131009700110] |This is best done by collapsing all lines to a single long line (and find a mechanism to to add line references, should you need to get that information out). [131009700120] |In this case the search and replace is pretty simple: [131009700130] |It requires an array: [131009700140] |And a loop through those elements: [131009700150] |My test file is as follows: [131009700160] |My full script (without line referencing) is as follows: [131009700170] |The output of that script stops on the innermost illegal uses of brackets. [131009700180] |But beware: 1/ this script will not work with brackets in comments, regular expressions or strings, 2/ it does not report where in the original file the problem is located, 3/ although it will remove all balanced pairs it stops at the innermost error conditions and keeps all englobbing brackets. [131009700190] |Point 3/ is probably an exploitable result, though I'm not sure of the reporting mechanism you had in mind. [131009700200] |Point 2/ is relatively easy to implement but takes more than a few minutes work to produce, so I'll leave it up to you to figure out. [131009700210] |Point 1/ is the tricky one because you enter a whole new realm of competing sometimes nested beginnings and endings, or special quoting rules for special characters... [131009710010] |Update 2: The following script now prints out the line number and column of a mismached bracket. [131009710020] |It processes one bracket type per scan (ie. '[]' '<>' '{}' '()' ...) The script identifies the first ,unmatched right bracket, or the first of any un-paired left bracket... [131009710030] |On detecting an erroe, it exits with the line and column numbers [131009710040] |Here is some sample output... [131009710050] |Here is the script... [131009720010] |What encoding does my Konsole support? [131009720020] |How can I check what encoding (ASCII, UTF-8, UTF-16 etc) my Terminal uses? [131009720030] |It shouldn't matter, but I am using Konsole on KDE3.5. [131009730010] |From a Unicode and UTF-8 FAQ for UNIX/Linux: You can get a list of all locales installed on your system (usually in /usr/lib/locale/) with the command locale -a. [131009730020] |Set the environment variable LANG to the name of your preferred locale. [131009730030] |Konsole supports whatever your system supports - the important thing is that you export LANG in your shell so that the programs you run display their characters using the correct locale. [131009740010] |How do I give all the permissions to a file for a single user that's not me? [131009740020] |When I want to grant access to another user to my file, I use chmod 777 file, but if I want to be sure I'm granting permission just for that user, how can I do it? [131009740030] |-- update [131009740040] |The file is owned by "root", so it's mine if I access it with sudo, I suppose (or maybe I'm confused.. please correct me). 
[131009740050] |I want to share a folder called /Data in the root. [131009740060] |The other user I want to share it is the root of an embedded system, which I'm accessing with telnet and NFS. [131009740070] |The files inside /Data are generated by me, and every time I generate them, I have to use the command chmod 777 /Data so I can access them from the embedded system. [131009740080] |I'm using Ubuntu in my computer, and a compiled-here-linux in the embedded system. [131009750010] |You need to find a group that only you and that user is part of, and give correct permission to the group, not the world. [131009750020] |Could be easier with access control lists, if available. [131009760010] |If you own the file, setfacl -m u:otheruser:rwx filename [131009760020] |If not, or if your filesystem doesn't support extended acls, I'm afraid you're out of luck. [131009770010] |Ignore whitespaces changes in all git commands [131009770020] |I've found tons of sites that explain how to have git warn you when you're changing line endings, or miscellaneous other techniques to prevent you from messing up an entire file. [131009770030] |Assume it's too late for that -- the tree already has commits that toggle the line endings of files, so git diff shows the subtraction of the old file followed by the addition of a new file with the same content [131009770040] |I'm looking for a git configuration option or command-line flag that tells diff to just ignore those -- if two lines differ only by whitespace, pretend they're the same. [131009770050] |I need this config option/flag to work for anything that relies on file differences -- diff, blame, even merge/rebase ideally -- I want git to completely ignore trailing whitespace, particularly line endings. [131009770060] |How can I do that? [131009780010] |For diff, there's git diff --ignore-space-at-eol, which should be good enough. [131009780020] |For diff and blame, you can ignore all whitespace changes with -w: git diff -w, git blame -w. [131009780030] |For git apply and git rebase, the documentation mentions --ignore-whitespace. [131009780040] |For merge, it looks like you need to use an external merge tool. [131009780050] |You can use this wrapper script (untested), where favorite-mergetool is your favorite merge tool; run git -c mergetool.nocr.cmd=/path/to/wrapper/script merge. [131009780060] |The result of the merge will be in unix format; if you prefer another format, convert everything to that different format, or convert $MERGED after the merge. [131009780070] |To minimize trouble with mixed line endings, make sure text files are declared as such. [131009780080] |See also Is it possible for git-merge to ignore line-ending differences? on Stack Overflow. [131009790010] |How to determine distribution from command line? [131009790020] |Possible Duplicate: Bash: Get Distribution Name and Version Number [131009790030] |Given (root) access to a machine with Linux through the command line(over ssh), how can I determine which distribution is actually running on the system? [131009800010] |How on earth can I stop this CUPS-related message on my Debian 6 virtual machine? [131009800020] |I’ve got a Debian 6 VMWare virtual machine that I mostly use via SSH (but occasionally via the GUI). [131009800030] |Every few minutes, the following message gets printed in my terminal: [131009800040] |For a while, I ignored it. [131009800050] |Then, I uninstalled CUPS: [131009800060] |and restarted. [131009800070] |I’m still getting the message. 
[131009800080] |How on earth can I stop it? [131009800090] |I’m never, ever going to want to print from this machine. [131009810010] |a) Removing cups doesn't actually remove CUPS. [131009810020] |b) you want to use apt-get purge not remove, probably. [131009810030] |You want to purge this lot, at a minimum. [131009810040] |You can do it with a wildcard or regex. [131009810050] |seems to delete all packages beginning with cups, but really we want all files containing cups. [131009810060] |Regex experts to the rescue, please. :-) See apt-get purge '^cups' output below. [131009810070] |(UPDATE: Per Paul's comments below, GNOME apparently insists on having some CUPs libraries installed, so I suggested he just remove frontend stuff, so in fact maybe he really does want apt-get purge '^cups'). [131009820010] |Ah — turns out I think this was actually a VMWare issue after all. [131009820020] |I disabled printers in VMWare’s virtual machine’s settings, and lo and behold, the problem (seems to have) disappeared. [131009820030] |VMWare must have been trying to get printing to work. [131009830010] |Message while booting : "pci 0000:00:00.0: BAR 0: can't allocate mem resource [0xc0000000-0xbfffffff]" [131009830020] |I get the following message on the console every time the Linux kernel boots: [131009830030] |pci 0000:00:00.0: BAR 0: can't allocate mem resource [0xc0000000-0xbfffffff] [131009830040] |Is this an error message? [131009830050] |What causes this message? [131009830060] |I am using Linux 2.6 kernel running on PowerPC (P2020) [131009830070] |Updating the question with the output of lspci and content of /proc/iomem [131009830080] |lspci [131009830090] |0000:00:00.0 Class 0604: Unknown device 1957:0070 (rev 20) [131009830100] |0000:01:00.0 Class 0200: Unknown device 14e4:1692 (rev 01) [131009830110] |0001:02:00.0 Class 0604: Unknown device 1957:0070 (rev 20) [131009830120] |0001:03:00.0 Class 0200: Unknown device 14e4:1692 (rev 01) [131009830130] |/proc/iomem [131009830140] |a0000000-bfffffff : /pcie@ffe09000 [131009830150] |a0000000-bfffffff : PCI Bus 0000:01 [131009830160] |a0000000-a000ffff : 0000:01:00.0 [131009830170] |a0000000-a000ffff : tg3 [131009830180] |c0000000-dfffffff : /pcie@ffe0a000 [131009830190] |c0000000-dfffffff : PCI Bus 0001:03 [131009830200] |c0000000-c000ffff : 0001:03:00.0 [131009830210] |c0000000-c000ffff : tg3 [131009830220] |ffe04500-ffe04507 : serial [131009830230] |ffe04600-ffe04607 : serial [131009830240] |ffe07000-ffe07fff : spi [131009830250] |ffe24000-ffe24fff : ethernet [131009830260] |ffe24520-ffe2453f : mdio [131009830270] |ffe25000-ffe25fff : ethernet [131009840010] |How to run a script from another path, and know the script's path? [131009840020] |I need to run a script that access a file in the same path it is located. [131009840030] |For example: I have the script in /home/me/folder/script.sh, and this script will access the file /home/me/folder/myfile. [131009840040] |As the file is in the same path of the script, I just use the name myfile, not the full path. [131009840050] |But if I run it from /tmp/, like: [131009840060] |it gives me: [131009840070] |So, how can I say in the script that the file is in the same path as the script itself? [131009850010] |How about $(dirname "$0")/myfile? [131009860010] |You can't know in all circumstances, but often (not always) when you run a script by path instead of letting the shell search $PATH, the path to the script will be in $0. [131009860020] |You can then use dirname "$0" or ${0%/*} to get the path. 
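As a rough sketch of that approach (myfile is the hypothetical data file from the question, and the script is assumed to be invoked by path rather than found via $PATH):

    #!/bin/sh
    # Directory part of the path the script was invoked as, e.g. /home/me/folder or ./folder
    scriptdir=$(dirname "$0")
    # Refer to the neighbouring file relative to the script, not to the current directory
    cat "$scriptdir/myfile"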
[131009870010] |I would normally do something like this at the head of my script: [131009870020] |Or, to get the full pathname to the script's directory (instead of a relative pathname): [131009870030] |Then just reference ${SCRIPTDIR} where needed. [131009880010] |Setting up a family server. [131009880020] |I'm very new to Unix, but after having become comfortable with bash over the past year and after having played with Ubuntu recently, I've decided to make my next computer run Ubuntu, and I think my wife is on board for her next computer as well. [131009880030] |Is it easy to set up a central family server so that each computer acts as a client for the information that is stored only in a single place? [131009880040] |What are the options? [131009880050] |Are there any online how-to documents for this? [131009890010] |Install Samba and create network Samba shares on your primary Ubuntu server so you can connect all your Ubuntu and Windows PCs to the same network folder. [131009890020] |See documentation here. [131009900010] |You can use Fish or SFTP to transfer files between computers, with minimal prior setup. [131009900020] |Both protocols transfer files over SSH, which is secure and encrypted. [131009900030] |They are very well integrated into KDE: you can type fish:// or sftp:// URLs into Dolphin's Location Bar, or you can use the "Add Network Folder" wizard. [131009900040] |SFTP at least seems to be supported by Gnome too. [131009900050] |I personally use Fish. [131009900060] |On the server machine Fish and SFTP need only an SSH server running, that you can also use to administrate the server machine. [131009900070] |Everyone who wants to access the server over Fish or SFTP needs a user account on the server. [131009900080] |The usual file access permissions apply, for files accessed over the network. [131009900090] |Fish and SFTP are roughly equivalent to shared directories on Windows, but both work over the Internet too. [131009900100] |Usual (command line) programs however can't see the remote files, only programs that use the file access libraries of either Gnome or KDE can see them. [131009900110] |To access the remote files through scripts, KDE has the kioclient program. [131009900120] |- [131009900130] |For a setup with a central server that serves both user identities and files look at NIS and NFS. [131009900140] |Both are quite easy to set up, especially with the graphical installers from Opensuse. [131009900150] |This is the setup where every user can work at any machine and find his/her personal environment. [131009900160] |However the client machines become unusable when they can't access the server. [131009900170] |Furthermore a simple NFS installation has very big security holes. [131009900180] |The local computers, where the users sit, have to handle the access rights. [131009900190] |The NFS server trusts any computer that has the right IP address. [131009900200] |A smart 12 year old kid with a laptop can get access to every file, by replacing one of the local machines with the laptop and recreating the NFS client setup (which is easy). [131009900210] |Edit: [131009900220] |Off course there is Samba, which has already been mentioned by Grokus. [131009900230] |It seems to be quite universal: It can serve files, printers, and login information. [131009900240] |It is compatible with Windows and Linux; there is really a PAM Module (Winbind) that lets Linux use the login information form a Samba or Windows server. 
[131009900250] |Samba (and Windows) does not have the security problems of NFS; it handles user identification and access rights on the server. [131009900260] |(Please note: I have never administered or installed a Samba server.) [131009900270] |My conclusion: Fish or SFTP are IMHO best for usage at home. [131009900280] |Use Samba if you have Windows clients too. [131009900290] |NFS is only useful if you can trust everybody, but I expect it to create the lowest CPU load. [131009910010] |If you also want access when you're away from home, I would consider using Dropbox or Ubuntu One for synchronized off-site storage and skip having your own server. [131009920010] |An out-of-the-box option is to re-purpose your current boxes as terminals (using something like ThinStation) and set them all up to auto-log into a beefy new Ubuntu box. [131009920020] |You could use DynDNS to keep an external name resolving and access the same system from work (assuming you aren't in a proxy black hole). [131009920030] |That would keep all of your files in a single location, and let you all share the same environment. [131009920040] |That said, this is not for the faint of heart. [131009920050] |Exposing your Linux server to the outside world is somewhat risky. [131009920060] |You would also all be using the same box, so if it failed you would all be out of luck. [131009930010] |A central server to host network shares is a good idea; a simple rsync script can ensure certain files stay synchronized on your local PC, if needed. [131009930020] |The server can also double as a backup location for your important documents, which in turn get backed up by the server onto an external drive or online storage. [131009930030] |I haven't used this, but Amahi Home Server could be a good place to look. [131009940010] |Diskless workstations [131009940020] |For various reasons I need a setup with one server, and two diskless workstations. [131009940030] |The workstations are to be "fat clients", which means I want to enable them to use their own CPU, memory, etc., for everything. [131009940040] |Ideally, the workstation users should not have to notice that they are running diskless at all (except for the PXE booting, obviously...). [131009940050] |The workstations should run openSUSE (some version between and including 11.2 and 11.4) since that is what we use. [131009940060] |They don't necessarily have to run a vanilla openSUSE install, but as close as possible. [131009940070] |The general idea is to PXE-boot the workstations, and then let them mount their (root) filesystems from the server via NFS. [131009940080] |I tried simply copying an existing openSUSE 11.4 installation to a directory which I then exported via NFS. [131009940090] |The kernel and initrd were then exposed via PXE/TFTP. [131009940100] |The problem is that the initrd from the install is tailored to the machine it was installed on, so using it as is did not work. [131009940110] |I have made some attempts to use LTSP (KIWI-LTSP for openSUSE) with very limited success. [131009940120] |So, now to my actual question(s): [131009940130] |1) Apart from modifying the initrd by hand to work with the diskless workstations, is there anything else I could use to aid me? [131009940140] |2) One idea I had was to use the same root ("/") for both workstations, and then mount stuff like /var and /tmp as tmpfs. [131009940150] |Are there any pitfalls to avoid here? [131009940160] |3) Any other ideas on how to accomplish this setup? [131009940170] |All ideas are very welcome! 
[131009950010] |I cannot give you a specific answer for openSUSE, but the process should be similar for most distros. [131009950020] |The Debian way (without going too deep into details): 
  • [131009950030] |Initrd images are built with update-initramfs (from the initramfs-tools package).
  • [131009950040] |Most of the time the stock initrd images are good for any normal system; since the boxes are diskless, the only things that absolutely need to work during the initrd stage are the network and NFS. [131009950050] |If a system needs certain modules to use its NIC, they must be specified in /etc/initramfs-tools/modules before running update-initramfs.
  • [131009950060] |The default image has NFS support, and the only thing needed is to add root=/dev/nfs nfsroot=x.x.x.x:/exportedfs dhcp to your PXE config's APPEND line.
  • [131009950070] |On the server side, the exported directory should be a working jailed environment, which makes it very easy to upgrade or install software or change configuration. [131009950080] |On the client side, /tmp can simply be a tmpfs. [131009950090] |For /var there are a couple of options (see the sketch below).
  • [131009950100] |Either mount some subdirectories as tmpfs, like /var/run, /var/log, /var/tmp, etc.
  • [131009950110] |Or mount a tmpfs somewhere and then use unionfs/aufs to merge it with /var. [131009950120] |This way the system will be able to write or change any file under /var, but the changes won't be persistent.
  • [131009950130] |You will probably need aufs for /etc as well; you'll need a script that runs early in the boot process and sets up some per-client stuff, like getting the hostname based on the IP and recreating /etc/hosts and /etc/hostname.
  • [131009950140] |The /home folder should also be NFS-exported and mounted read-write. [131009950150] |NIS or LDAP or AD or something similar to manage user accounts (and/or configuration files) will help you keep the mess down a bit.
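As a rough sketch of how the pieces above fit together (the server address 192.168.0.1 and the export path /srv/nfsroot are made-up examples, and the exact boot-parameter spelling can vary between initramfs implementations):

    # pxelinux.cfg/default on the TFTP server (illustrative values only)
    DEFAULT diskless
    LABEL diskless
      KERNEL vmlinuz
      APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot ip=dhcp rw

    # /etc/fstab inside the exported root, covering the volatile client directories
    tmpfs   /tmp       tmpfs   defaults,noatime   0 0
    tmpfs   /var/run   tmpfs   defaults,noatime   0 0
    tmpfs   /var/log   tmpfs   defaults,noatime   0 0
    tmpfs   /var/tmp   tmpfs   defaults,noatime   0 0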
  • [131009950160] |I should note that I never did aufs over NFS; in the past I had some problems with unionfs over network shares. [131009950170] |Some of the stuff above is theoretical, and it was a very long time ago that I ran a few diskless systems of my own. [131009960010] |Performance monitoring [131009960020] |Is there some performance monitoring tool which would run in the background gathering info about all system activity? [131009960030] |Sometimes my system (Arch Linux, 32-bit) slows down terribly and the top utility doesn't show anything. [131009960040] |I imagine some daemon which would gather info and log it, so that when the slowdown passes I would be able to find out what the problem was. [131009970010] |The slowdown might not have been caused by CPU utilization. [131009970020] |Check iotop for I/O utilization. [131009980010] |How about sar? [131009990010] |Consider installing munin. [131009990020] |It will monitor a wide variety of data and provide graphical output. [131009990030] |This is better for monitoring trends. [131009990040] |You may also want to consider running sar in the background. [131009990050] |It can identify a number of issues, including CPU, I/O, and swap problems. [131009990060] |If you are experiencing problems, this may be your best bet for your current situation. [131010000010] |There are a lot of them. [131010000020] |If you want it to be basic and command-line, take a look at sar. [131010000030] |Or you could use some monitoring tool with a nice web UI. [131010000040] |Personally I prefer Zabbix; there are also Monitorix (very simple to set up), Nagios, Zenoss and many others. [131010000050] |Monitorix is probably what you want at this point. [131010010010] |I shouldn't use root on my new Ubuntu cloud instance, right? [131010010020] |I am playing with Ubuntu on my new Rackspace cloud instance. [131010010030] |However, the information they give me is for root access -- that doesn't seem like a best practice for doing development on this thing. [131010010040] |What is the best practice for setting up a cloud instance for development? [131010010050] |Should I create another user that allows me to install my rails [131010020010] |Generally it's best to use the least privileged user that can get the job done. [131010020020] |Also, it's inevitable that you have to use the root account some time (even using sudo, which Ubuntu embraces, still counts as using root privileges). [131010020030] |There is no "you shouldn't use root", just "you shouldn't use root for normal tasks". [131010020040] |For software development you should definitely create a user account of your own and use that for everyday tasks. [131010020050] |Set yourself up as a sudoer, and disable the root password if you like. [131010020060] |You probably have to use root access to set up your development environment, so be prepared to go sudo apt-get install thingy. [131010020070] |Final words: use root when you have to, but don't feel bad about it. [131010020080] |It actually feels quite good :) [131010030010] |No sound in WebEx player in Wine [131010030020] |I am using Fedora Core 10. I have installed Wine with both wine-alsa and wine-oss sound drivers. [131010030030] |I am able to hear test sounds by running winecfg, but when I run the WebEx player there is no sound, and the option to increase volume is disabled. [131010030040] |I am not sure whether the problem is with Wine or with WebEx; how can I debug/fix this? 
[131010040010] |Screen status bar multiple lines [131010040020] |I am using screen with several tabs open to separate my projects between them. [131010040030] |However, when I open too many tabs they just appear off screen and I can no longer see them in my "screen status bar". [131010040040] |I can still switch to them, but not see them in my list of windows. [131010040050] |How can I make it so that my "screen status bar" will expand to two lines when necessary? [131010040060] |I've gotten a .screenrc from a friend to start with which put me where I'm at, but I'd like to customize it to afford me this option. [131010050010] |I turn off the status bar, myself, because that's not a pratical way to manage screen with 40+ windows. [131010050020] |Using Ctrl-A + " will open a list of all screens. [131010050030] |You can name individual windows with Ctrl-A + A. [131010050040] |I also use a customized .screenrc that, among other things, shows the Shell Title message in the Window listing. [131010060010] |Looking for non-portable, but amazingly efficient system calls [131010060020] |For example, on Linux there is sendfile(2), splice(2) and vmsplice(2) that do some very awesome things with DMA, direct memory access. [131010060030] |What are some of your favorites (one system call per response so folks that up vote others). [131010060040] |This isn't limited to just Linux, bring on BSD, AIX, Solaris, HP-UX or DMA. [131010070010] |tar through ssh session [131010070020] |Possible Duplicate: What does “-” mean as an argument to a command? [131010070030] |When I use this command: [131010070040] |What does the '-' mean? [131010070050] |And is this the correct interpretation of the above command? tar the tmp directory and ssh the tarball to test.com and untar it. [131010080010] |The option -f - (i.e. the - only makes sense in conjunction with -f) tells tar to use the standard input/output instead of a filename. [131010080020] |Which makes sense since you try to pipe the output to ssh. [131010080030] |Furthermore, tar allows writing shorthand options without the leading dash, which is why you (correctly) simply wrote tar f - instead of tar -f. [131010090010] |The "-" is a placeholder for stdout. [131010090020] |In this case you are piping the output from the one command into the ssh session. the ssh out is piped to the remote server terminal's stdout and is again delivered to the tar command for processing. [131010100010] |Does OpenBSD use bcrypt by default? [131010100020] |http://codahale.com/how-to-safely-store-a-password/ [131010100030] |with what does OpenBSD store the password by default? [131010100040] |They say bcrypt is way more secure then hashing. [131010100050] |I googled' it and obsd supports bcrypt, but does it use it by default? [131010100060] |Thank you! [131010110010] |From: http://www.openbsd.org/papers/bcrypt-paper.pdf [131010120010] |How many minutes have passed since the Unix Epoch ? [131010120020] |Sorry for the lame question [131010120030] |How many minutes have passed since the Unix Epoch ? [131010120040] |It should be January 1, 1970 ? [131010120050] |Let's say approximately until 1 Jan 2011... [131010120060] |5 865 696 000 minutes ? [131010140010] |You can check the online version of Unix Epoch: Epoch Converter. [131010140020] |You can get the current Epoch value and also convert time back to epoch [131010150010] |If you've got Python installed then you can run this: [131010160010] |change font-size, number of rows / columns on a terminal. 
[131010160020] |Hi, [131010160030] |I am running a Linux server (without X Windows or any kind of GUI), with a modern 22" LCD monitor. [131010160040] |Given the huge size of my monitor I would like to increase the number of rows and columns on my terminal; how can I go about achieving it? [131010160050] |I am a noobie, please pardon my ignorance. [131010170010] |Add something like "vga=792" to the kernel line in your grub.conf file, usually located in /etc or /boot/grub: [131010170020] |You can say vga=ask instead to get a menu on boot, built from a probing process performed by the kernel to see which resolutions are likely to work. [131010170030] |I find that there are often other numbers that will work that this method doesn't find. [131010170040] |This is all system-specific. [131010170050] |Different video cards will have different supported modes, and kernel build options can open up or close off video mode options. [131010170060] |The subsystem that deals with this is called the kernel framebuffer, so if you're compiling custom kernels, be careful not to remove the support your kernel needs to fully support your video card. [131010170070] |Most cards can use the VESA FB driver, but another driver specific to your brand of card might open more options. [131010170080] |Also, beware that some parts of this subsystem use hex numbers, and others decimal. [131010170090] |You can do the conversion to decimal, as I've done, or you can say something like "vga=0x318" instead. [131010180010] |How can I update the OS on an iPhone on a Linux machine? [131010180020] |As I understand it, the iPhone will pop up some options on a Windows computer to update the system when you plug it into the computer. [131010180030] |Honestly, I don't own an iPhone, but my friend wants to use my computer to update hers, because it's not receiving data properly. [131010180040] |So is it possible for me to mount the phone and push some data to it, or otherwise update it? The only relevant-looking link I found via Google suggested a VM, which is more than I want to do. [131010180050] |Tutorial links are of course welcome. [131010180060] |Also please advise if there's a decent chance that doing it this way could brick the phone. [131010190010] |Why does the ACPI namespace keep changing? [131010190020] |Does anyone know why the ACPI namespace keeps changing? [131010190030] |I had a script a while back on a Red Hat system which read the CPU temperature from '/proc/acpi/thermal_zone/THRM'. [131010190040] |Now I have new (but similar) hardware and roughly the same distro (except for a few drivers here and there), and it has changed to '/proc/acpi/thermal_zone/TZ00' and '/proc/acpi/thermal_zone/TZ01'? [131010190050] |Is this even the CPU temperature? [131010190060] |Or is it for something else? [131010190070] |I know most will say "read the ACPI docs"... but that's beside the point. [131010190080] |Why, and who, keeps changing the namespace? [131010190090] |I've been using Linux since about '97 now, and I'm seriously fed up with EVERYTHING that keeps changing on me. [131010190100] |They talk about 'cheap' Total Cost of Ownership? [131010190110] |Yah, right!!! [131010200010] |Is it safe to leave an encrypted folder mounted? [131010200020] |I was looking at this and eCryptfs seems cool. [131010200030] |I'd like to use it on my server to encrypt git pushes. [131010200040] |I don't want to ssh in every time to mount/unmount the encrypted folder. [131010200050] |However, having to do it once in a while when I reboot or whatever is fine. [131010200060] |I like the idea that someone at my provider can't see my folder when scanning through a bunch of hard drives, but is it relatively safe for the folder to be always mounted? [131010200070] |I don't think the password would be in memory in plaintext? Nor could someone connect to my server over the network and be able to access the files? (The enc/prv folder would be 700.) [131010200080] |Do I really have anything to worry about if I leave it mounted all the time? [131010200090] |My server is Debian 6 (squeeze), if that's interesting. [131010210010] |Leaving an encrypted filesystem mounted increases the attack surface, i.e., there are a few more places where an attacker can exploit a vulnerability and get access to your files.
  • [131010210020] |If the attacker can run code as your user, she can access your files. [131010210030] |If the encrypted filesystem wasn't mounted, she wouldn't have direct access to your files, but there's a good chance she'd be able to inject some kind of trojan (e.g. a keylogger) and obtain your passphrase eventually.
  • [131010210040] |If the attacker can read the memory of your processes, she gets the secret key, which she can use to decrypt an offline copy of the files if she has one (e.g. from a stolen backup). [131010210050] |Your password doesn't remain in memory (hopefully, I haven't checked the code), but the secret key has to. [131010210060] |If the filesystem wasn't mounted, she wouldn't get anything. [131010210070] |But if she could read the memory of the mount process when it's mounting the filesystem, she would get the secret key then.
  • [131010210080] |If the attacker can read files with your user's permissions, she gets the plaintext. [131010210090] |If the filesystem was not mounted, she would only get the ciphertext and the passphrase-encrypted secret key (which she could try to brute-force with a password cracker).
  • [131010210100] |Overall the increase in the attack surface is slight. [131010210110] |Encfs can automatically unmount the filesystem after a period of inactivity (encfs -i MINUTES) (where activity means open files). [131010210120] |It's a good idea to use this if there's a risk that the computer will be physically stolen (mostly relevant for laptops). [131010210130] |Otherwise there is only a small gain, because most attack vectors let the attacker do worse things anyway. [131010220010] |How to create yaboot partition using a ppc Mac [131010220020] |My computer g4 1.5 ghz PowerBook ppc [131010220030] |I need to use this computer to make a yaboot partition on a USB stick. [131010220040] |Here is documentation for making a yaboot partition with a Linux machine: http://penguinppc.org/bootloaders/yaboot/doc/yaboot-howto.shtml/index.en.shtml [131010220050] |but how do I do this with Mac OS 10.5? [131010230010] |Cannot access network on the command line in Ubuntu 10.04 [131010230020] |Hi, [131010230030] |I am having an issue which I am unable to diagnose. [131010230040] |I am unable to access outside the local network from the command line. [131010230050] |Strangely, ftp works from the command line. [131010230060] |But ping, links, traceroute, wget or other utilities are unable to connect. [131010230070] |The network works fine from graphical browsers like firefox. [131010230080] |We have a network proxy at the workplace which I set using environment variables http_proxy and so on. [131010230090] |Any ideas on how could I diagnose this? [131010230100] |Thanks. [131010240010] |It sounds to me like there is a firewall in place blocking access to the outside world and the proxy server handles the required access to FTP and the web. [131010250010] |If your proxy is not blocking the other access you need, you may have to configure the proxy for all apps/services, not only for ftp and browsers. [131010250020] |For example, for apt you need to do: [131010250030] |or for authenticated access: [131010250040] |You probably have a way to configure the proxy for everything in your config or system menu. [131010260010] |It's possible that DNS isn't configured correctly. [131010260020] |You didn't give us example error messages from ping, traceroute, etc, or the value of the "http_proxy" environment variable. [131010260030] |If "http_proxy" just contains an IP address, and you're doing "ping some_fqdn", then it's entirely possible that /etc/resolv.conf doesn't have the correct contents in it, or that /etc/nsswitch.conf isn't correct. [131010270010] |Will there be third party compositing with Gnome 3, or will it be limited? [131010270020] |Will we still be able to use Compiz? [131010280010] |In a way yes: You will be able to switch away from all the fancy new stuff and just use the gnome-panels like you did with Gnome 2. [131010280020] |In this mode it should not be too difficult to replace the WM. [131010280030] |However, in standard, fancy mode you will only be able to use Mutter aka Metacity 3. [131010280040] |Gnome 3 is just too different, it uses lots and lots of composite effects to provide the overlay, animations and a new concept of workspace. [131010290010] |Where can I find the source code of "GFileInfo" functions like "g_file_info_get_content_type"? [131010290020] |I want to read the source code of a GFileInfo function: g_file_info_get_content_type ( GFileInfo *info ); [131010290030] |Could somebody tell me where to find the source code file? 
[131010290040] |I search the glibc code by didn't found the functions. [131010290050] |These function are introduced in this link: http://library.gnome.org/devel//gio/2.26/GFileInfo.html [131010300010] |Here's the file you are looking for. [131010300020] |Note that the page you linked is generated from it. [131010300030] |It's actually part of GLib (GTK+ Library) which is part of the GNOME project, but is used by a host of other software projects. [131010300040] |You might wanna get a git checkout for the sake of convenience. [131010310010] |Where can I find the source code of libgio? [131010310020] |Where can I find the source code of libgio.so? [131010310030] |I want to study how its GFileInfo components works. [131010310040] |Great thanks! [131010310050] |Amanda [131010320010] |The gio git repository [131010330010] |Where did you get libgio.so? [131010330020] |On most Linux distributions, there's an automatic way of retrieving the source code of a package. [131010330030] |For example, on Debian, Ubuntu and derived distributions, run dpkg -S to see what package libgio.so belongs to, then apt-get source to get the source code of that package. [131010330040] |Example ($ represents my shell prompt; on my system, the gio library is in a file called libgio-2.0.so): [131010340010] |git and remote security with Encfs [131010340020] |I wanted to create a user on my server to use as a private repository. [131010340030] |I'd like the files to be encrypted. [131010340040] |Encfs will timeout after minutes of inactivity so i was wondering. [131010340050] |Is there a way to have git ssh in, run certain commands to mount the encrypted folder, do what it needs then unmount it? [131010340060] |One thought is the password should not be on my server HD so i should either have it locally or a script on my server should create it based on the password i enter when logging in. [131010340070] |Maybe there is a better solution? [131010350010] |Maybe packing your repo in an GPG encrypted tar. [131010350020] |It's possible to delete the private key from your home each time, so your repo will be almost undecryptable. [131010350030] |Each time you log-in, you write the private key to your home, decrypt the repo, and use it. [131010360010] |Why does `htop` show more process than `ps` [131010360020] |In ps xf [131010360030] |In htop, it shows up like: [131010360040] |Why does htop show more process than ps? [131010370010] |For me, on a more-or-less current arch linux system, ps xf shows me only the processes owned by my user ID. htop shows me all processes. [131010370020] |Try ps -ef for a list of all processes, or perhaps ps -ejH to get a child/parent relationship listing. [131010380010] |By default, htop lists each thread of a process separately, while ps doesn't. Turn off the display of threads: in the “Setup / Display options” menu, “Hide userlands threads”. [131010380020] |This puts the following line in your ~/.htoprc (you can alternatively put it there manually): [131010380030] |(Also hide_kernel_threads=1, but it's 1 by default.) [131010390010] |Running continuous jobs remotely. [131010390020] |Hello everyone. [131010390030] |I'm new to Linux, and I'm probably asking a fairly basic question. [131010390040] |How do I run a continuous program on Linux? [131010390050] |Basically, I have a program that will continuously check for content on a website. [131010390060] |This program will be executing for several days. 
[131010390070] |I do not have administrative privileges on the computer I wish to run this program on, and I do not have exclusive access to this computer either, meaning that other users can log on physically. [131010390080] |Moreover, barring some untold disaster, the computer will be running for the entire duration of my program. [131010390090] |I know that I can SSH into the machine and run the program. [131010390100] |However, this is not desirable as I would need a second machine to do it (which kind of defeats the point of what I'm doing), not to mention timeout issues. [131010390110] |Is there a way I can run my program on the computer remotely without establishing a connection of some sort after I get it started? [131010400010] |You need to do two things: [131010400020] |
  • Run the command with nohup. [131010400030] |This stops the remote machine from terminating the command when you disconnect.
  • [131010400040] |Run the command in the background by appending &. [131010400050] |This isn't strictly necessary, but it will return you to a shell instead of blocking on the command, so you can disconnect normally or do other things.
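Putting the two together, a minimal sketch (./checker.sh, job.log and job.pid are just illustrative names for your program and its output):

    nohup ./checker.sh > job.log 2>&1 &   # immune to hangups; output collected in job.log
    echo $! > job.pid                     # remember the PID so you can check on or kill it later

You can then log out; when you come back, tail job.log or kill "$(cat job.pid)" as needed.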
  • [131010400060] |So, for a command called command: [131010410010] |nohup is definitely the right way to go if you can run your program without interacting with it: if it writes all its output and error messages into a file that you can get to later, for instance. Check the nohup.out file to see any error messages the program left behind while running (thanks for the correction, Michael!). [131010410020] |If you need to interact with the program while it's running, GNU Screen might be the better way to go about it. [131010410030] |It lets you create a virtual terminal which remains running even after you log out, along with any programs you started in it, as long as nobody shuts the computer down. [131010410040] |Here's a tutorial I found on using it. [131010420010] |If it is something that needs to happen at regularly scheduled intervals, use cron (e.g. you need to check the website once every hour, or once every day, or more or less frequently than that, but still at regularly defined intervals). [131010420020] |However... [131010420030] |You may want to run a command at a certain later time rather than right now; for that you want to use the at daemon, which allows you to run a command once at a later date/time (like it's 5 o'clock and you want to go home, but you've got a 4-hour process that would be best run in the middle of the night, and it's not recurring). [131010420040] |Since nohup, screen and tmux have been mentioned: use nohup if you want to run it right now but don't want to reconnect to that session to check on it later; screen/tmux are for checking on it later. [131010430010] |Make a symbolic link to a relative pathname [131010430020] |I can do this: [131010430030] |But I'd like to be able to do this: [131010430040] |Is this possible? [131010430050] |Can I somehow resolve the relative pathname and pass it to ln? [131010440010] |If you give a relative path when creating a symbolic link, it will be stored as a relative symbolic link, not an absolute one like your example shows. [131010440020] |This is generally a good thing. [131010440030] |Absolute symbolic links don't work when the filesystem is mounted elsewhere. [131010440040] |The reason your example doesn't work is that it's relative to the parent directory of the symbolic link and not to where ln is run. [131010450010] |You could try: [131010450020] |But, it makes a symbolic link to the absolute pathname. [131010450030] |The text of your question mentions a relative pathname... [131010450040] |Normally, a relative pathname is what you want, and what ln -s gives you. [131010450050] |I think what you want is: [131010460010] |Sorry, no. Symbolic links are relative to the location the link is in, not the location you were when you created the link. [131010460020] |There are several good reasons for this behavior, most of which involve mounting remote filesystems.
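To illustrate the point about links being resolved relative to the directory that contains them, here is a small sketch with a made-up layout under /tmp/demo:

    mkdir -p /tmp/demo/a /tmp/demo/b
    touch /tmp/demo/a/myfile
    cd /tmp/demo/a

    # The link text is stored verbatim and resolved from /tmp/demo/b, so this dangles:
    ln -s myfile ../b/broken              # ../b/broken -> myfile (no such file in b/)

    # Store a target relative to the link's own directory...
    ln -s ../a/myfile ../b/works-relative

    # ...or resolve it to an absolute path first:
    ln -s "$(pwd)/myfile" ../b/works-absolute

    ls -l /tmp/demo/b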