[131086150010] |What client for DECwindows? [131086150020] |I have inherited a VAX system that was just migrated onto CHARON-VAX (VMS emulated on Windows) I can access it via ssh, but not with X. Anyone here familiar with DECwindows? is it X compliant? [131086150030] |Or do I need a specific client to access it? [131086160010] |I had a similar problem on an ubuntu box could not run X11 over ssh. [131086160020] |port forwarding can be disabled on client and server, so may need to check in /etc and ~ of server. [131086160030] |There is something in the authorized keys file, and some thing in a config file, from what I remember. [131086170010] |Using the system date / time in a Cron Script [131086170020] |Good afternoon, [131086170030] |I'm setting up a Cronjob that will backup a MySQL database I have in my server, but I don't want it to keep overwriting the same file over and over again. [131086170040] |Instead, I want to have an array of backups to choose from, done automatically. [131086170050] |For example: [131086170060] |And so on. [131086170070] |Is there any way that I can use the system date and/or time as some kind of variable in my Cronjob? [131086170080] |If not, what are your suggestions to accomplish the same? [131086180010] |You should be able to use date. [131086180020] |Type info date or man date for details. [131086180030] |Something like the following might suit you (change the date format to your needs) [131086190010] |You could try something like this: [131086190020] |To see if your particular cron will run the command out of crontab as a script in and of itself, or if you need to write a script that figures out the date as a string, and then runs your mysqldump command. [131086190030] |"cron" on Redhat Enterprise Linux 5.0 (I think) did not like the "$()" construct, and kept giving me errors about not finding a matching ')'. [131086190040] |NOTE: As glenn jackmann notes below, you have to escape all '%' characters. [131086190050] |works under RHEL 5.0. [131086190060] |I would also take the recommendation to use ISO8601 date format (yyyy-mm-dd) to make the file names order by date when sorted lexically. [131086200010] |Here's the bash script I used: [131086200020] |Files look like: [131086200030] |Point the cron job at this to run nightly or whatever you prefer. [131086210010] |Relaunch application once finished [131086210020] |Is there a command that relaunch the application once it finishes from the command line? [131086210030] |Letting you do something like: [131086210040] |If not, then what's my best option? [131086210050] |I know I could cron it, but I'd be more interested in something I could just execute from the terminal and that restarts at once. [131086210060] |I'm on Debian if that matters. [131086220010] |You can try with a simple infinite loop: [131086220020] |Edit: the above is just a simple generic example. [131086220030] |Most probably modifications are needed to take into account exit errors etc. [131086220040] |For example: [131086230010] |If you want to stop on an error: [131086240010] |How do I add PC BSD / FreeBSD to Grub 2 boot loader? [131086240020] |I have Ubuntu 10.04 installed as my primary operating system, and I installed PC BSD in a different partition: /dev/sda4 without installing it's boot loader. [131086240030] |I figured out that I need to edit /etc/grub.d/40_custom to add an entry for PC-BSD. [131086240040] |So far, nothing seems to work, though. 
[131086240050] |EDIT: this sort of works, but doesn't fully boot the OS, it then asks me for the MOUNTROOT partition. [131086240060] |The selected answer below is correct. [131086240070] |If you are dual-booting with Linux I suggest NOT installing the PC-BSD bootloader as the documentation suggests, unless you enjoy pain. [131086250010] |Hi postfuturist, this is what I have in /etc/grub.d/40_custom. [131086250020] |Works for me :) Just remember to subsitude hd0,3 with your correct entry [131086260010] |What's the difference between "export" and "setenv"? [131086260020] |What's the difference between export and setenv? [131086270010] |there is none but: [131086270020] |setenv is the name of the command in the *csh family of shells [131086270030] |export is the name of the command in the "other" family of shells (ash, bourne, bourne again, zsh) [131086270040] |and, ok, the syntax is slightly different. but other than that? none. [131086280010] |Recommend a Linux Distribution for my use. [131086280020] |Hi, [131086280030] |I have no experience with Linux. [131086280040] |I'm looking for a distribution which is beginner friendly. [131086280050] |Ubuntu looks great. [131086280060] |My usage for this system will include web development, and possibly application development. [131086280070] |However, I would prefer a system where I can design my own theme, alter the interface. [131086280080] |Think "theming". [131086280090] |What would you recommend? [131086280100] |EDIT I'm looking to run this on a 2008 MacBook Pro 2.4Ghz C2D, 2GB GDDR3. [131086280110] |Will it be okay performance wise through virtualization such as VMWARE or should I use BootCamp? [131086290010] |Ubuntu is a good choice for a first distribution, if you want something you can get up-and-running quickly and easily. [131086290020] |You might also consider fedora as well. [131086290030] |You can certainly theme an Ubuntu installation. [131086290040] |See this thread for a good starting point - HowTo: theme your desktop [131086300010] |Ubuntu offers all of that: [131086300020] |
  • It was designed from the outset to be a newbie-friendly Debian; I've used both, and it certainly is easier, at least on the surface (i.e. the basic stuff).
  • [131086300030] |
  • It has one of the largest collections of software of all distros; this includes a whole bunch of development stuff (all major programming languages, a whole bunch of web frameworks, and a lot of developer libraries and tools).
  • [131086300040] |
  • You can even change Desktop Environments if you please, or theme the default one.
  • [131086300050] |
  • A very large user base (it has been the most popular distro for several months now), and therefore huge resources at your disposal. [131086300060] |It even has a dedicated Stack Exchange site.
  • [131086300070] |
  • It officially supports a number of CPU architectures, including yours.
  • [131086310010] |Unix systems tend to favor text files, often consisting of one record per line. [131086310020] |Most unix configuration files are text files. [131086310030] |Unix systems come with many tools to manipulate such files. [131086310040] |Most tools process the file in a stream: read a line, process it, emit the corresponding output; this makes it possible to chain scripts with pipes. [131086310050] |Use this tag when your question is about processing text files and you're not sure which tool to use. [131086310060] |If your question is about a specific tool, use its tag. [131086310070] |

    Text processing utilities

    [131086310080] |
  • sed a simple line-by-line text processor, mostly used for regexp substitutions.
  • [131086310090] |
  • awk a scripting language dedicated to text file processing
  • [131086310100] |Text processing often involves combining many single-purpose tools, such as: [131086310110] |
  • cut select fields on each line
  • [131086310120] |
  • diff compare two files line by line
  • [131086310130] |
  • grep search a pattern in text files
  • [131086310140] |
  • od display binary files in decimal, octal or hexadecimal
  • [131086310150] |
  • sort sort lines or fields alphabetically
  • [131086310160] |
  • split split a file into fixed-size pieces
  • [131086310170] |
  • tail show the last few lines of a file; tail -f keeps the file open in case more data arrives
  • [131086310180] |
  • tee replicate the output of a command and send it to several destinations
  • [131086310190] |For a list of many text utilities and more, check out busybox commands or GNU coreutils. [131086310200] |
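To give a concrete feel for combining these single-purpose tools with pipes, here are two small illustrative one-liners (the file names are only examples):

    # count how many accounts use each login shell: cut + sort + uniq
    cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn

    # show only the error lines among the last 100 lines of a log: tail + grep
    tail -n 100 /var/log/syslog | grep -i error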

    Other related tags

    [131086310210] |
  • shell text processing is usually performed by shell scripts that call the tools described above
  • [131086310220] |
  • pipe many tasks require chaining several tools
  • [131086310230] |
  • coreutils the collection of GNU utilities (text processing and others), for regular Linux systems
  • [131086310240] |
  • busybox a collection of utilities (text processing and others) for embedded Linux systems
  • [131086310250] |
  • perl python ruby when the going gets tough, it's better to switch to more general languages
  • [131086320010] |Manipulation of text by programs, scripts, etc [131086330010] |Getting PHP to work with apache to run .php files through browser [131086330020] |I have VPS running Debian 5.0 (I think) and I would like to get it to run PHP files. [131086330030] |I was told it needed to be configured with Apache. [131086330040] |I tried entering the command apt-get install apache2 php5 libapache2-mod-php5. [131086330050] |But there was no change. [131086330060] |Console output: http://pastebin.com/sVMWq6mA [131086330070] |This is everything in my /etc/apache2/mods-enabled: http://img35.imageshack.us/img35/6474/modsb.jpg [131086330080] |My webserver can be accessed here: http://206.217.223.136/test/ In my test.php file I have the code : [131086330090] |but instead of displaying the page, it tries to download it. [131086330100] |How can I fix this? [131086340010] |I think you probably missed restarting apache2... [131086340020] |You will have to run this command from your root account, or put sudo on the beginning of it. [131086340030] |For full details on installing PHP5 - use the official php5 on debian instructions HERE [131086350010] |Run below commands again and check... [131086360010] |programmatically creating a remote directory using ssh [131086360020] |Using a shell script, how can I create a directory remotely from serverA -> serverB using ssh? [131086360030] |Can I pass the create directory command with the ssh command? [131086360040] |How? [131086360050] |I'm basically looking to create a directory remotely based on date (ex. 20110304), if it doesn't already exist. [131086370010] |you can accomplish it like this: [131086370020] |or If you have a script on serverA.... [131086380010] |don't know what you mean exactly by "by date", but you could test before mkdir as below [131086380020] |if you want a directory name as you mention, try [131086390010] |If you want to generate a date programmatically, take a look at date +format: [131086400010] |how to use secured port using ssh [131086400020] |We have a process that is using a working secured port using this scp command like so: [131086400030] |/usr/bin/scp -P 1234 -i /path/key_rsa /home/path/filename.txt user@remotehost:/tmp [131086400040] |I'm working on a schell script, that will use ssh, but not sure how to use that same port of 1234. [131086400050] |I tried /usr/bin/ssh -D 1234 remotehost '. ~/my_profile; mkdir /test' and its asking for the user pw. [131086400060] |I know the keys are already established since the scp works fine for that user. [131086400070] |What am I doing wrong? [131086400080] |Thanks. [131086410010] |You want to use the -p option; -D is for dynamic port forwarding, that is to say creating port forwardings on an existing connection. [131086420010] |Script that keep reading a stream [131086420020] |Sometimes I cat a stream like /dev/input/event0. [131086420030] |I want to write a script that does something every time there is more output. [131086420040] |The definition of more output might be every time it reads a byte. [131086420050] |How can that be done? is there some command that does it? [131086430010] |From a shell script you will be limited to complete lines. [131086430020] |You'll need to use C/Perl/Python/whatever for finer grained reading. [131086440010] |A variation on geekosaur's answer: You might want to try read -n 1 byte to read one byte at a time, then do something with $byte. 
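A minimal sketch of that idea in bash; the EDIT below walks through the whitespace quirks that make the IFS= and -r parts necessary (also note that NUL bytes cannot be stored in a shell variable at all):

    #!/bin/bash
    # react every time a byte arrives on the stream (the device path is just an example)
    while IFS= read -r -n 1 byte; do
        printf 'got byte: %q\n' "$byte"   # replace with whatever should happen
    done < /dev/input/event0

On bash 4.1 or later, read -N 1 can be used instead of -n 1 so that delimiter bytes such as newlines are delivered as well.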
[131086440020] |EDIT: [131086440030] |Just tried this as I had never used that command before (just looked up info bash), but it seems to munch all white space and line endings. [131086440040] |I don't have an explanation for this yet. [131086440050] |Try the following scripts to fine tune command arguments: [131086440060] |So unfortunately this does not give the expected result. [131086440070] |EDIT (with Chris' help): [131086440080] |This gives exactly the expected result. [131086440090] |Note: whether I use -n, -N, or -rN does not change the result, it's all good (with text, I did not test the limitation that Chris talks about: 0x00 and 0xff). [131086450010] |How can I cut a large file in place? [131086450020] |I have a very big text file, about 80GB, and I need to cut a part form it that lies between two given lines. [131086450030] |The part I need is not big, and I have not enough space left on the hard drive to do things like: [131086450040] |How do I do that? [131086460010] |Pipe one to the other: [131086470010] |Replace 3 and 10 with your range of lines. [131086470020] |The sed commands basically says print (p) everything between lines 3 and 10. [131086470030] |The -n tells it to do it quietly, otherwise it prints out the input as its reading the file. [131086480010] |Installing individual packages from Solaris .iso [131086480020] |I need to update the following packages: [131086480030] |I read that the updated versions of these packages can be found on the latest version of Solaris, and that it's possible to just install the specific packages I want. [131086480040] |I can only find them as directories in the .iso and they all have the following structure: [131086480050] |Is it possible to install these as some sort of package? [131086480060] |Or turn them into a .pkg? [131086480070] |Thanks for any help. [131086490010] |They already are a package - simply use pkgadd -d /path/to/cdrom/Product SUNWjaf SUNWjato SUNWjmail or whatever the parent directory is containing those subdirectories. [131086500010] |How do I add a new user to an embedded Linux system by hand? [131086500020] |I have a system which was designed for use by root only and I want to run an FTP server on it. [131086500030] |The software I use handles authentication by using same username and password as OS itself. [131086500040] |To me it looks safer to have another user for FTP data transfer and another which would just run FTP server. [131086500050] |So here's my problem: As I've said, system was designed to be used only by root and there's no useradd or anything similar, as far as I can see. [131086500060] |Is it possible to add user by hand? [131086500070] |I'm running OpenWRT Backfire 10.03.1-rc4, if it matters, but generic answers would be best. [131086510010] |Very roughly: [131086510020] |You should of course make sure the username, user id, and group id are available. [131086510030] |Also, unless the account needs shell access for some reason, set the shell to something like /bin/false or /sbin/nologin (if the latter is available). [131086520010] |removing startup item from com.apple.launchd [131086520020] |Hi Guys, [131086520030] |Im not sure if this is the right place for this kind of question, but here it goes anyways... [131086520040] |I installed a program a few months ago, it had a startup option that I did install. [131086520050] |Later, I decided to remove the program, but it seems to have left the startup script or item even after I uninstalled it.... and its trying to start every few seconds. 
[131086520060] |How can I remove this item? [131086520070] |Here is it from the console [131086530010] |Oh dear, that one turns out to be evil. [131086530020] |The evil part is they have no business putting it in /System/Library/LaunchDaemons, which is reserved for services provided with OSX; it should have been in /Library/LaunchDaemons. [131086540010] |Installing mplayer in linux [131086540020] |i dont know if this is the right forum to ask,i have started to use linux os (opensuse). i need to install mplayer on it.i unpacked the mplayer archive.then in the command screen i navigated to the folder containing the unpacked folder of mplayer.then i used the following commands [131086540030] |but after that when i entered the make command it is showing ...typo doesnt exist or so...i think only after make and make install i can install this program to my system...pls help [131086550010] |You'd probably be better served by [131086550020] |If you really want to compile from source, you'll need to install a compiler and toolchain. [131086550030] |You will also need to install various development library packages; which ones depend on the version of mplayer and what features you want to use. [131086550040] |In general, you're better off with the pre-built one; mplayer is an absolute beast to build. [131086560010] |I was about to say the same thing as @geekosaur, but noticed in the comments that you don't have internet connection on that PC. [131086560020] |In general, on Linux software is divided into packages (mplayer, sudo, zypper are examples of packages). [131086560030] |Packages are stored in repositories and have dependencies on other packages. [131086560040] |To save you the task of managing them, there are package managers, such as zypper. [131086560050] |You don't normally install software from source, but use the package manager that comes with your system. [131086560060] |Now the big problem is that you don't have internet connection on the computer that you want to install software on. [131086560070] |If possible, it's a lot easier to plug the cable in and let zypper download what it needs. [131086560080] |If that's not possible, most package managers have the ability to install from a local repository. [131086560090] |I'm not a SUSE user, but from the documentation you can download the required .rpm files to make a local repository, then tell zypper about it: [131086560100] |After that you can install mplayer without internet connection: [131086560110] |If zypper then tells you that it needs to install other packages as dependencies (and it will fail because there is no internet connection), you will have to look for the RPM files it need, download and put them in my/dir/with/rpms (BTW that's a fake path, change it to whatever path you store the files). [131086570010] |IPTABLES rule for separating users [131086570020] |I have an OpenWrt 10.03 router [ IP: 192.168.1.1 ], and it has a DHCP server pool: 192.168.1.0/24 - clients are using it through wireless/wired connection. [131086570030] |Ok! [131086570040] |Here's the catch: I need to separate the users from each other. [131086570050] |How i need to do it: by IPTABLES rule [ /etc/firewall.user ]. [131086570060] |Ok! [131086570070] |"Loud thinking": So i need a rule something like this [on the OpenWrt router]: [131086570080] |- DROP where SOURCE: 192.168.1.2-192.168.1.255 and DESTINATION is 192.168.1.2-192.168.1.255 [131086570090] |The idea is this. [131086570100] |Ok! [131086570110] |Questions! 
[131086570120] |- Will I lock myself out if I apply this firewall rule? [131086570130] |- Is this a secure method? [ is it easy to do this: hello, I'm a client, and I say my IP address is 192.168.1.1! - now it can sniff the unencrypted traffic! :( - because all the clients are in the same subnet! ] [131086570140] |- Are there any good methods to find/audit for duplicated IP addresses? [131086570150] |- Are there any good methods to find/audit for duplicated MAC addresses? [131086570160] |- Are there any good methods to do this iptables rule on Layer 2?: $ wget -q "http://downloads.openwrt.org/backfire/10.03/ar71xx/packages/" -O - | grep -i ebtables $ [131086570170] |p.s.: The rule would be [is it on a good chain?]: iptables -A FORWARD -m iprange --src-range 192.168.1.2-192.168.1.255 --dst-range 192.168.1.2-192.168.1.255 -j DROP Thank you! [131086580010] |If you want to separate wireless and wired users, why not match the interfaces? [131086580020] |Assuming ppp0 is facing the internet, eth0 is your local LAN and wlan0 is the wireless: [131086580030] |If you use this: [131086580040] |
  • nothing can be connected from the internet
  • [131086580050] |
  • wireless users can only connect to the internet
  • [131086580060] |
  • wired users can only connect to the internet
  • [131086580070] |
  • you can enforce separate IP ranges if you add the --src-range
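For illustration only (this is not the original listing; the interface names follow the ppp0/eth0/wlan0 assumption above), FORWARD rules with that effect might look roughly like this:

    # default: forward nothing
    iptables -P FORWARD DROP
    # let replies to already established connections back in
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # wired and wireless clients may only talk to the internet (ppp0)
    iptables -A FORWARD -i eth0 -o ppp0 -j ACCEPT
    iptables -A FORWARD -i wlan0 -o ppp0 -j ACCEPT
    # everything else (eth0<->wlan0 and internet->LAN) falls through to DROP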
  • [131086580080] |If your DHCP server is running on the OpenWrt device then the FORWARD chain will not affect that in any way. [131086580090] |To allow the DHCP server use [131086580100] |I generally allow everything in OUTPUT except a few types of ICMP and spam. [131086580110] |But you might prefer the safer default DROP so here is the specific rule: [131086580120] |It makes more sense on a router which is not supposed to connect to everything. [131086580130] |I would advise against MAC filtering in my experience it adds no security only inconvinience. [131086580140] |But if you want to see: [131086580150] |Logging MAC addresses could be useful but they are easily forged. [131086580160] |Just add -j LOG or -j NFLOG before the ACCEPT rule with the same matching rules. [131086580170] |Since you are configuring a computer which is only accessible from the network you should be very careful not to lock yourself out. [131086580180] |You can't just walk to it and delete the rules manually. [131086580190] |In particular typing iptables -P INPUT DROP with an empty INPUT chain will kill your SSH session. [131086580200] |I recommend using the iptables-save and iptables-restore and writing the rules in a config file. [131086580210] |It also helps if you can test the rules on a computer with a keyboard and monitor before trying it on the router. [131086590010] |How to test file system correction done by fsck. [131086590020] |How to make sure that fsck is correcting corruptions while keeping the integrity of file system? [131086590030] |Suppose clients are writing lots of files through NFS to server, and there happens something that caused corruption (dirty shutdown/other kernel panic). [131086590040] |So the current state of the filesystem is not known (partial write, etc). [131086590050] |Then if we run fsck, it corrects corruptions (e.g. invalid blocks), and the filesystem is now supposed to be up to date. [131086590060] |How do I make sure my filesystem *i*s up to date? [131086590070] |In common, I would use diff or dt to check again with the source from which the files were being written. [131086590080] |But in this case, let's say that the source is no longer present after writing the files. [131086600010] |-N Don't execute, just show what would be done. [131086600020] |Again, you would just do something along the lines of: shell> fsck -N /dev/sda1 [131086610010] |Fsck returns your filesystem to a consistent state. [131086610020] |This is not necessarily the filesystem's “latest” state, because that state might have been lost in the crash. [131086610030] |In fact, if there were half-written files at the time of the crash, then the filesystem was not left in a consistent state, and that is precisely what fsck is designed to repair. [131086610040] |In other words, after running fsck, your filesystem is as up-to-date as it can get. [131086610050] |If your application requires feedback as to what is stored on the disk in case of a crash, you'll need to do more work than just writing to a file. [131086610060] |You need to call sync, or better fsync, after a write operation to ensure that that particular write has been committed to the disk (but if you end up doing this a lot, your performance will drop down, and you'll want to switch to a database engine). [131086610070] |You'll need a journaled filesystem configured for maximum crash survival (as opposed to maximum speed). 
[131086610080] |The property that an operation (such as a disk write) that has been performed cannot be undone (even in the event of a system crash) is called durability. [131086610090] |It's one of the four fundamental properties of databases (ACID). [131086610100] |If you need that property, read up on transactions. [131086610110] |Although filesystems are a kind of database, they're usually not designed to do well with respect to ACID properties: they have more emphasis on flexibility. [131086610120] |You'll get better durability from a dedicated database engine. [131086610130] |Then consider what happens in case your disk, and not your system crashes: for high durability, you also need replication. [131086620010] |tripwire report - inode number [131086620020] |Hi I am investigating tripwire and have stumbled upon something which i am unsure about. in a tripwire report generated after i modified hosts.deny to include an extra # I noticed the inode number changed from 6969 to 6915. [131086620030] |I would like to know why this happened. [131086620040] |I know inodes are records which store data about where data is stored on the file system, but would like to know why this number changed for a simple # being inserted. [131086630010] |Standard behavior for text editors is to rename the original file to a temporary name before writing out changes, so if there is a problem (such as out of disk space) you don't lose the file entirely. [131086630020] |Thus the file gets a new inode number. [131086630030] |If the editor is configured to leave the original as a backup file, you'll find the backup file has the original inode number; if not, then the backup will have been deleted after the new file was successfully written. [131086640010] |Where to handle packets between clients? [131086640020] |"Server": 192.168.1.1 [131086640030] |I want to "theoretically" disable that the clients can "ping" each other. [131086640040] |Can i use an iptables rule for it? e.g.: [131086640050] |Is it true that i cannot filter traffic between the clients?? [or at least redirect these packets to e.g.: the router?] [131086640060] |If i run tcpdump on the router ["server"] i can see that a client [192.168.1.201] is pinging another [192.168.1.162] [131086650010] |Where do you want to disable it? [131086650020] |If all traffic runs through a router or switch that can run iptables then yes, it is simple. [131086650030] |If you want to block it on each machine, and they all run iptables, then yes, again - simple. [131086650040] |On most TCP/IP implementations you can disallow ICMP at the client end. [131086650050] |Almost all routers that allow access controls will let you block ICMP. [131086650060] |BUT...are you 100% certain you want to? [131086650070] |A lot of apps really like a wee bit of ping to keep em happy :-) [131086660010] |How to tweak Linux to run reliably on flash memory? [131086660020] |Since flash memory only has a limited number of writes, what tweaks are necessary for installing a Linux system onto flash media so that the OS can run reliably for a long period of time? [131086660030] |Some examples of flash memory installations include burning a Linux image onto a wireless router's flash memory, or installing a linux distro onto a box that uses an SD card for it's hard drive. 
[131086660040] |Also, besides wireless router firmware (OpenWRT, DD-WRT, etc) which presumably already implements such tweaks, are there any general-purpose distributions that either make these tweaks or allow you to use them as an option? [131086670010] |The /tmp and /var directories are the ones that many system programs write to a lot, and depend on being writeable. [131086670020] |Minimizing writes to these directories, or configuring Linux to mount these directories on external storage devices that are replaceable, as opposed to on board flash, would go a long way towards accomplishing your goal. [131086670030] |/home and swap partition should be treated the same way. [131086670040] |rsyslogd, the default syslogd in Debian and many Debian-derived distros, has the capability to not write logs to disk, but ship them over a network connection, and only write them to storage if an internal buffer gets full. [131086670050] |Implementing this (which I'm trying to figure out how to do currently in a good way) could eliminate a lot of flash writes. [131086670060] |Also, you want to mount your file systems with the noatime option which prevents Linux from updating the access time on each file you touch. [131086670070] |This can also eliminate a lot of writes and speed up performance. [131086670080] |I think there's also a kernel parameter that controls the time interval between Linux's automatic sync call. [131086670090] |If your system doesn't expect to experience sudden power outages you could set that to a higher value than the default of 5 seconds (I think). [131086680010] |It would be more proper to say Flash Memory has only a limited number of erase cycles, these caused eventually by writes. [131086680020] |There are many good articles available about this distinction. [131086680030] |When you mention burning a Linux image into router firmware, that is probably NOR flash or an EEProm. [131086680040] |NOR is the type of flash with quicker reads, NAND the type with quicker writes. [131086680050] |Under ext3, the journal is the most frequently written file, and those writes will eventually fill a block, forcing the erase of another block. [131086680060] |Setting a larger commit= value on mount would gather these journal writes into larger chunks. [131086680070] |Finally, to echo other solutions, mounting with noatime is a standard practice that will reduce impact. [131086690010] |Why are PATH variables different when running via sudo and su? [131086690020] |On my fedora VM, when running with my user account I have /usr/local/bin in my path: [131086690030] |And likewise when running su: [131086690040] |However, when running via sudo, this directory is not in the path: [131086690050] |Why would the path be different when running via sudo? [131086700010] |sudo bash is starting a completely new shell. su doesn't do this unless you use the - option, I think. [131086700020] |bash when invoked will run commands in ~/.bash_profile and ~/.bashrc. [131086700030] |There's likely a PATH=... command in one of those files in /root or wherever Fedora puts the root user's home directory. [131086710010] |In most linuxes, you install programs via the package management, and get updates in a regular way. [131086710020] |If you install something circumventing the package management it will be installed in /usr/local/bin (for example, or .../sbin, or /opt) and not get regular updates. [131086710030] |I guess therefore the programs aren't considered to be that secure, and not put into roots PATH by default. 
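Either way, a quick way to see what sudo actually does with PATH on a given box (illustrative commands only; the exact sudoers line varies between distributions):

    # PATH as your own user
    echo "$PATH"
    # PATH that a command started through sudo actually sees
    sudo sh -c 'echo "$PATH"'
    # check whether sudo is configured to reset the path
    sudo grep -i secure_path /etc/sudoers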
[131086720010] |I've just tried this out for myself and I didn't see the behaviour you were seeing - my path remained the same, so maybe your sudo configuration is different. [131086720020] |If you check man sudoers you'll see there is an option called secure_path which resets PATH - it sounds like this option might have been enabled. [131086730010] |Take a look at /etc/sudoers. [131086730020] |The default file in Fedora includes this line: [131086730030] |Which insures that your path is clean when running binaries under sudo. [131086730040] |This helps protect against some of the concerns noted in this question. [131086730050] |It's also convenient if you don't have /sbin and /usr/sbin in your own path. [131086740010] |Because when you use sudo bash, bash doesn't not act as a login shell. [131086740020] |Try again with sudo bash -l and you should see the same result as su -. [131086740030] |If that is correct, then the difference in PATH lies in the configuration files: /etc/profile, ~/.bash_profile, ~/.bash_login, ~/.profile are executed (in that order) for a login shell, while ~/.bashrc is executed for a non-login interactive shell. [131086750010] |The command su - will execute the root users profile and take on that user's environment including path etc. sudo does not do that. [131086750020] |If you'd like sudo to behave like su - then use the option sudo -i [command which will execute the user's profile [131086750030] |If you'd like su - to behave like sudo then don't use the hyphen - just use su [command] [131086760010] |gnu find and masking the {} for some shells - which? [131086760020] |The man page for gnu find states: [131086760030] |That's from the man to find (GNU findutils) 4.4.2. [131086760040] |Now I tested this with bash and dash, and both don't need to have the {} being masked. [131086760050] |Here is a simple test: [131086760060] |Is there a shell, for which I really need to mask the braces? [131086760070] |Note, that it doesn't depend upon whether the file found contains a blank (invoked from bash): [131086760080] |This changes if the found file is passed to a subshell: [131086760090] |in contrast to: [131086760100] |but that's not what the man page is talking about, is it? [131086760110] |So which shell treats {} in a different way? [131086770010] |In a word, csh. bash and other modern shells recognize that the user probably isn't asking for a null brace expansion. [131086770020] |(Modern csh is actually tcsh and may also handle {} sanely by now.) [131086780010] |Summary: If there ever was a shell that expanded {}, it's really old legacy stuff by now. [131086780020] |In the Bourne shell and in POSIX-compliant shells, braces ({ and }) are ordinary characters (unlike ( and ) which are word delimiters like ; and &, and [ and ] which are globbing characters). [131086780030] |The following strings are all supposed to be printed literally: [131086780040] |A word consisting of a single brace is a reserved word, which is only special if it is the first word of a command. [131086780050] |Ksh implements brace expansion as an incompatible extension to the Bourne shell. [131086780060] |This can be turned off with set +B. [131086780070] |Bash emulates ksh in this respect. [131086780080] |Zsh implements brace expansion as well; there it can be turned off with set +I or setopt ignore_braces or emulate sh. [131086780090] |None of these shells expand {} in any case, even when it's a substring of a word (e.g. foo{}bar), due to the common use in arguments to find and xargs. 
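A quick demonstration in bash (the second line is the output these shells are expected to produce, not something captured from the original post):

    $ echo {} foo{}bar
    {} foo{}bar
    $ find . -name '*.txt' -exec echo {} \;   # the unquoted {} reaches find intact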
[131086780100] |Single Unix v2 notes that [131086780110] |In some historical systems, the curly braces are treated as control operators. [131086780120] |To assist in future standardisation activities, portable applications should avoid using unquoted braces to represent the characters themselves. [131086780130] |It is possible that a future version of the ISO/IEC 9945-2:1993 standard may require that { and } be treated individually as control operators, although the token {} will probably be a special-case exemption from this because of the often-used find {} construct. [131086780140] |This note was dropped in subsequent versions of the standard; the examples for find have unquoted uses of {}, as do the examples for xargs. [131086780150] |There may have been historical Bourne shells where {} had to be quoted, but they would be really old legacy systems by now. [131086780160] |The csh implementations I have at hand (OpenBSD 4.7, BSD csh on Debian, tcsh) all expand {foo} to foo but leave {} alone. [131086790010] |/usr/bin vs /usr/local/bin on Linux [131086790020] |Why are there so many places to put a binary in Linux? [131086790030] |There are atleast these five: [131086790040] |
  • /bin/
  • [131086790050] |
  • /sbin/
  • [131086790060] |
  • /usr/bin/
  • [131086790070] |
  • /usr/local/bin/
  • [131086790080] |
  • /usr/local/sbin/
  • [131086790090] |And on my office box, I do not have write permissions to some of these. [131086790100] |What type of binary goes into which of these bins? [131086800010] |The sbin directories contain programs which are generally for system administration only. [131086800020] |Programs for regular users should never go in them. [131086800030] |A few programs are needed during startup, and end up in /bin/ or /sbin/. [131086800040] |These must be available before file systems are mounted. [131086800050] |Things like mount and fsck, which are required to check and mount file systems, must be there. [131086800060] |Most packaged programs end up in /usr/bin/ and /usr/sbin/. [131086800070] |These may be on a file system other than the root file system. [131086800080] |In some cases they may be on a network-mounted drive. [131086800090] |Local programs and scripts belong in /usr/local/bin/ and /usr/local/sbin/. [131086800100] |This identifies them as clearly non-standard, and possibly only available on site. [131086800110] |For further explanation try running the command man hier, which should provide a description of the recommended file system hierarchy for your distribution. [131086800120] |You may also want to read about the File System Hierarchy on Wikipedia. [131086810010] |
  • /bin (and /sbin) were intended for programs that needed to be on a small / partition before the larger /usr, etc. partitions were mounted. [131086810020] |These days, /bin mostly serves as a standard location for key programs like /bin/sh, although the original intent may still be relevant for e.g. installations on small embedded devices.
  • [131086810030] |
  • /sbin, as distinct from /bin, is for system management programs (not normally used by ordinary users) needed before /usr is mounted.
  • [131086810040] |
  • /usr/bin is for distribution-managed normal user programs.
  • [131086810050] |
  • There is a /usr/sbin with the same relationship to /usr/bin as /sbin has to /bin.
  • [131086810060] |
  • /usr/local/bin is for normal user programs not managed by the distribution package manager, e.g. locally compiled packages. [131086810070] |You should not install them into /usr/bin because future distribution upgrades may modify or delete them without warning.
  • [131086810080] |
  • /usr/local/sbin, as you can probably guess at this point, is to /usr/local/bin as /usr/sbin to /usr/bin.
  • [131086810090] |In addition, there is also /opt which is for monolithic non-distribution packages, although before they were properly integrated various distributions put Gnome and KDE there. [131086810100] |Generally you should reserve it for large, poorly behaved third party packages such as Oracle. [131086820010] |I recommend taking a look at the file system hierarchy man page: man hier [131086820020] |which is lso available online, for instance http://linux.die.net/man/7/hier [131086830010] |The Filesystem Hierarchy Standard entry in Wikipedia helped me answer the same question when I had it, plus it has a very explanatory table. [131086840010] |Can't burn DVD -- incompatible format [131086840020] |I have been trying to burn a few ISOs lately with no success and have certainly used this hardware in the past, so I am not sure what's going on. [131086840030] |I am using DVD-R media (tried multiple discs) which appears to be supported by my drive. [131086840040] |Here is the error I received in GnomeBaker on Fedora 14 (64): [131086840050] |...and this: [131086840060] |more info: [131086840070] |Does anyone have any ideas as to what caused this? [131086840080] |I just reinstalled Fedora today (for other reasons) and I am still unable to burn a DVD. [131086840090] |I am using Memorex DVD-R discs, and have tried different ones. [131086840100] |The drive appears to read media just fine. [131086840110] |Thanks! [131086850010] |Selecting the right GRUB [131086850020] |I've just installed Backtrack on my harddrive (got one), i also got Fedora and Windows 7. [131086850030] |However, now i get the Backtrack-GRUB instead of my Fedora GRUB. [131086850040] |How do i change that? [131086850050] |I got a sda5 containing my Fedora GRUB so it should be easy to 'rewire' - i don't know how tho. [131086860010] |This is my first instinct as a long-time Gentoo user: [131086860020] |Mount the partition(s) with grub on it: [131086860030] |and copy the relevant section in $FEDORA/boot/grub/grub.conf into your Backtrack grub.conf. [131086860040] |Not Fedora, but quick and easy. [131086860050] |Alternatively: [131086860060] |Mount the proc filesystem so that Fedora will see it too: [131086860070] |Chroot into Fedora: [131086860080] |Here should come some magic to sanitize the chroot environment, I have no idea how that should look like in Fedora. [131086860090] |This is Gentoo: [131086860100] |Now, fire up grub: [131086860110] |Grub commands copied from the Gentoo page: [131086860120] |Here, (hd0) is the first hard drive, and (hd0,0) is the first partition on it. /dev/sda5 usually comes out as (hd0,4), but make sure to double-check everything as you go along. [131086860130] |Grub has auto-complete, so it should be easy. [131086860140] |A word of advice, whichever route you take: when you remove either distro, make sure you remember where your system boots from! [131086870010] |Backtrack probably overwrite the MBR with its records, thus causing the Backtrack GRUB to show up instead of the Fedora one. [131086870020] |Now to bring back the Fedora GRUB you need to tell grub to create appropriate records in the MBR. [131086870030] |See the question restore suse grub for how to restore GRUB. [131086870040] |You didn't mention if you are using Grub Legacy or Grub 2, but if both your Fedora and Backtrack installations use Grub 2 you can (from my answer on the linked question): [131086870050] |
  • Boot into Backtrack
  • [131086870060] |
  • Mount Fedora somewhere, say /mnt/fedora
  • [131086870070] |
  • grub-setup -d /mnt/fedora /dev/sda
  • [131086880010] |How can I share a shell with my colleague w/o using VNC? [131086880020] |Many a times, I want my colleague to have a look at some code on my system. [131086880030] |He will mostly do it at his free time, and will need to login as me. [131086880040] |Is there a way I can open a new shell as me, and then transfer the shell to him on his machine, so that he can use it whenever he wants to? [131086890010] |I would consider using screen to do this. [131086890020] |Although, the only method I know of does produce some security concerns. [131086890030] |Screen has the ability to create access control lists and the ability to allow multiple screen sessions with a variety of permissions. [131086890040] |The setup can be a bit tricky, but the idea is this: [131086890050] |
  • Create a user account for your coworker.
  • [131086890060] |
  • Give your coworker ssh access to your machine.
  • [131086890070] |
  • Make the screen executable setuid root (dangerous).
  • [131086890080] |
  • Change the permissions on /var/run/screen to 755 (other permission setups might be doable; this is just what I've always done. [131086890090] |Also, this is the path on Debian; I'm unsure if it is different elsewhere)
  • [131086890100] |
  • Edit your ~/.screenrc to enable multiuser mode: [131086890110] |
  • Edit your ~/.screenrc to set up the right permissions using the commands: acladd, aclchg, and aclgrp. [131086890120] |See the man pages for the details.
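For example, a ~/.screenrc along these lines (the account name coworker is only a placeholder):

    multiuser on
    # give that account access to this session
    acladd coworker
    # optionally restrict it, e.g. make every window read-only for that account
    # aclchg coworker -w "#"
    # the coworker then attaches from his own login with:  screen -x yourlogin/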
  • [131086890130] |Your coworker could then log into your machine via ssh and connect to your screen session. [131086890140] |Via the ssh config, you could actually force him to connect to the screen session upon his logging in. [131086890150] |The following blog post has more detailed instructions (these directions are roughly based off of them) in the context of holding a class using screen: [131086890160] |http://blog.dustinkirkland.com/2009/04/teaching-class-with-gnu-screen.html [131086890170] |Your use case is a bit different, but I think that the only real difference will be the permissions you set in ~/.screenrc and the name of the user. [131086900010] |screen -x ought to be the simplest solution. [131086910010] |How can I find out what happened to my Debian box? [131086910020] |I have an old PC lying around which I've installed Debian 6.0 on. [131086910030] |Last night I was trying to SSH in and it wouldn't respond so I pressed the reset button. [131086910040] |How can I find out what happened to it? [131086910050] |It seems fine now. [131086920010] |Reading the .1 logs is always a good place to start. [131086920020] |Use iptraf to see if your machine makes any suspisious connections (if someone got/has unauthorized access). [131086920030] |Run a rkhunter scan: aptitude install rkhunter rkhunter --update rkhunter --check [131086920040] |And should it ever happen again, attach a monitor and see what the console says :) [131086930010] |Assuming your computer is usually stable, check for hardware problems, especially with the RAM (i.e. install memtest86+ and choose memtest at the boot prompt), but also with disks (disk errors sometimes crash the filesystem code; install smartmontools and run smartctl -a /dev/sda). [131086930020] |If the problem was gradual, you may find something in the kernel logs (/var/log/kern.log), but often the crash happens too brutally for anything to be written to the logs. [131086940010] |By this - [131086940020] |How can I find out what happened to it? [131086940030] |I presume you want to know what happened during your failed attempt at SSH! [131086940040] |One place to look into will be /var/log. [131086940050] |Something like grep -ir ssh /var/log/* should give you the SSH related log entries. [131086950010] |How to best encrypt and decrypt a directory via the command line or script? [131086950020] |I have a directory of text files under bazaar version control and keep a copy (a branch, actually) on each of my machines. [131086950030] |I want to encrypt and unencrypt the directory via the command line. [131086950040] |Ideally, I would also be able to have a script run at logout to check if the directory is encrypted and encrypt it if not, all without user intervention. [131086950050] |I do not, however, want the dir to be decrypted on login. [131086950060] |(I want the script as a guard against forgetting to encrypt manually. [131086950070] |This is especially important for the netbook.) [131086950080] |I'm running ubuntu 10.04.1 and two versions of crunchbang linux, one a derivative of ubuntu 9.04, the either of a late June snapshot of the Debian Squeeze repos. [131086950090] |What is the best way to do this? [131086950100] |(I tried to tag with encryption and directories, but lack the rep to create a tag.) [131086960010] |How about using gpgdir? [131086960020] |This should be scriptable for login and logout. 
[131086960030] |You can also select subdirectories which are supposed to be encrypted (you may want file such as .bash_rc to remain decrypted, for example). [131086960040] |Another alternative may be Truecrypt (missing rep. does not allow a link): You can create a container for your data and encrypt/decrypt it via shell scripts. [131086970010] |Do you have administrative access to the machines? [131086970020] |One could use an encrypted loopback device. [131086970030] |Example: [131086970040] |make a container file for the encrypted fs: dd if=/dev/zero of=container bs=1024k count=100 [131086970050] |bind container file to loopback device 0: losetup container /dev/loop0 [131086970060] |create encrypted device (-y asks for passphrase twice): cryptsetup -c serpent-xts-essiv:sha256 -b 512 -y create container /dev/loop0 [131086970070] |create ext2 filesystem on encrypted device (can use anything really): mkfs.ext2 /dev/mapper/container [131086970080] |mounts encrypted filesystem to crypt directory: mount /dev/mapper/container crypt [131086970090] |-- [131086970100] |man cryptsetup &&man losetup [131086970110] |Also, read up on cryptography best practises, for information on choosing cipher and key lengths to use etc. [131086980010] |You could also use ecryptfs, which is standard on Ubuntu and its derived distributions. [131086980020] |That's what is used when the install process asks you if you want to crypt your home directory (http://www.linuxjournal.com/article/9400). [131086980030] |The advantage of ecryptfs is that you don't need a separate partition, or a loopback mounted file to use it. [131086990010] |It looks like what you're after is not a way to encrypt and decrypt directories, but a way to work with encrypted storage transparently. [131086990020] |Note that the scheme you propose, with actual mass decryption and encryption, is not very secure: it leaves things unencrypted if you don't log out normally (power failure, system crash, stolen laptop...); and it leaves traces of your confidential data that a determined attacker could find (the data from erased files is still on the disk, just hard to find). [131086990030] |Current Linux systems offer several ways to achieve transparent encryption. [131086990040] |You can encrypt a whole volume with dm-crypt or one of its alternatives. [131086990050] |There are several tools available to encrypt a specific directory tree, including ecryptfs (which works at the kernel level) and encfs (which works purely in userland via fuse). [131086990060] |(The three I mention are available in Debian lenny and should be offered by all of your distributions.) [131086990070] |You can set up the encrypted directories to be mounted when you log in either via PAM (libpam-mount package; recommended option for ecryptfs) or through your profile scripts (recommended option for encfs). [131086990080] |Note that there is no problem with “forgetting to encrypt manually” since nothing is ever written unencrypted to the disk. [131086990090] |For best protection, you should encrypt not just your confidential files, but also other places where confidential data may be stored by programs. [131086990100] |At least, you should encrypt your swap partition. [131086990110] |Other places to watch include /tmp (best solved by making it tmpfs), /var/spool/cups if you print confidential documents, and per-application files in your home directory such web caches/histories (e.g. ~/.mozilla). 
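As a concrete starting point for the encfs route, the session below is only a sketch and the directory names are just examples:

    # ciphertext lives in ~/.crypt, the cleartext view appears under ~/Private
    encfs ~/.crypt ~/Private      # the first run creates the store and asks for a passphrase
    # ... work on the files under ~/Private ...
    fusermount -u ~/Private       # detach; only the encrypted files remain on disk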
[131087000010] |bridged networking with kvm [131087000020] |I'm trying to get a guest virtual machine connected to my network using bridging. [131087000030] |I've come across a couple of resources online, but they seem to be out of date, deal with xen or Ubuntu or don't seem to be complete. [131087000040] |The host is running CentOS 5.5 and I'm using libvirt to manage the VMs so I use it to create the VMs and start and stop them. [131087000050] |I have the bridge created (br0) and have attached eth0 to it. [131087000060] |The VM doesn't seem to get an IP address, I want to use DHCP for addresses, I'll setup a static lease for the VM. [131087000070] |ifconfig from the host: [131087000080] |The output of brctl show [131087000090] |Output from route: [131087000100] |Finally, here's the networking section of the vm I'm trying to configure: [131087010010] |KVM sets up its own bridge. [131087010020] |This is the bridge virbr0. [131087010030] |You should be able to configure how this is networked. [131087010040] |On the VM the interface should show up at eth0 not a bridge. [131087010050] |This will be the other side of the vnet0 device. [131087010060] |I work on Ubuntu where KVM will startup a DNSMasq server for the bridged network to hand out DHCP addresses. [131087010070] |KVM will also play with iptables to configure access to the network for your VM. [131087010080] |Try removing the bridge you created and restarting the VM. [131087010090] |I would expect it to get an address in the 192.168.122.0 range from what I see of your configuration. [131087010100] |I didn't like how KVM was interacting with my firewall, so did my own manual networking for KVM. [131087010110] |My configuration uses a virtual bridge which isn't connected to an Ethernet interface. [131087010120] |The KVM Networking page from the Ubuntu community may help you understand how KVM is doing networking now. [131087010130] |EDIT: I took a second look at the bridged networking. [131087010140] |I am not sure why you have an 192.168.1.x address on eth1. [131087010150] |You configuration looks pretty much as I would expect. [131087010160] |Try setting a static address on the VM to see if it can communicate. [131087010170] |To test to see what is happening with DHCP, I would try running tcpdump on br0 or eth0 watching for DHCP traffic, or any traffic from mac address 54:52:00:1a:c8:4f. [131087010180] |Then try to get a DHCP address. [131087010190] |You may need to enable SPT on the bridge. [131087010200] |The reason I did my own networking was to enable access to my VMs from the outside. [131087010210] |I run two bridges, one of which hosts my DMZ. [131087020010] |Why do /usr and /tmp directories for Linux miss vowels in their spellings? [131087020020] |I have often started to think about this but never found a good answer. [131087020030] |Why are these two Unix directories not /user and /temp instead? [131087020040] |All the other directories under root seem to be exactly what one would guess them to be, but these two seem odd, I would have always guessed them as user and temp. [131087020050] |Is there some historical reason for the spellings? [131087030010] |Yup there were reasons. [131087030020] |They are pronounced user and temp. passwd is similar, as is resolv.conf. [131087030030] |Unix is an expert friendly, user antagonistic operating system. [131087030040] |I was a student when 300 Baud modems were the norm. 
[131087030050] |I was the envy of my fellow students, since I had a Silent 700 terminal from Control Data where I was working. [131087030060] |You could see the delay from typing each character and waiting for it to be echoed. [131087030070] |Every character counted; I also see it as fostering the start of leet speak. [131087030080] |The hjkl from vi have a history which few know. vi was developed by Bill Joy when he was a grad student at UCB during these same years. [131087030090] |The ADM 3a terminals in Cory Hall had arrow keys above those letters [131087040010] |They are holdover from Unix. [131087040020] |Memory and disk space was in short supply. [131087040030] |Hacking out a few vowels and other abbreviations gave real savings. [131087040040] |A few disk blocks or a few bytes could mean the difference in being able to run a program or not. [131087040050] |(I once had to trim a program by 24 bytes before it would run.) [131087040060] |Also as Tom noted terminal speeds were slow. [131087040070] |1200 baud was introduced as high speed and it was. [131087040080] |I worked with one system that used a half speed teletype (55 baud or 5 cps) as the console. [131087040090] |On systems running graphical interfaces it generally doesn't matter that much as the average users won't be poking around in them. [131087040100] |The directories are usually well documented in the hier man page. [131087040110] |Changing over to longer names would cause a lot of problems for existing programs. [131087040120] |It would also limit script portability. [131087040130] |Linking multiple names to the same directories would likely be more confusing than helpful. [131087040140] |EDIT: PDP-7 on which Unix was developed had a base configuration of 4KW of memory and a maximum of 32KW. [131087040150] |Words where 18 bits wide. [131087040160] |Input was a teletype, so speed was likely 110 baud or 10cps, roughly 100 words which is significantly slower than speech. [131087050010] |I'm surprised that nobody has commented on /user yet. [131087050020] |This one is obvious: because it's not "user", it's "Universal System Resources". [131087050030] |As for /tmp, it's simply shorter and easier to type. [131087050040] |C programmers have a habit of using short names when possible because they end up typing them many times in a program. [131087050050] |It is very common have a temporary variable named "tmp" rather than "temp" for the same reason, you use "i" i for the index in a for loop instead of "index" or "counter". [131087060010] |All the other directories under root seem to be exactly what one would guess them to be, [131087060020] |There is also /var, /mnt and /opt ;) [131087060030] |but these two seem odd, I would have always guessed them as user and temp. [131087060040] |Almost there. [131087060050] |As Shawn said, "user" stands for "Universal System Resources" (though other resources according to teh google indicates it stands for "Unix System Resources"). [131087060060] |Is there some historical reason for the spellings? [131087060070] |Short cuts, abbreviations. [131087060080] |Remember that commands in any operating system are meant for accessing both interactively and programmatically. [131087060090] |In particular for systems administration where fast scripting is one primary concern, abbreviations, mnemonics are as good (or even better) than the full spelled word/command. 
[131087060100] |Also, back in the day, if you were connecting remotely through a slow-as-molasses modem, shaving a couple of vowels here and there would make your life easier (or less miserable if you were a sysadmin trying to find out what the hell is wrong with a remote box.) [131087060110] |As said before, it is not unique to /usr and /tmp (see /var, /mnt and /opt). [131087060120] |Also, it is not unique to Unix. [131087060130] |Take DOS for example (chkdsk, for example.) Mnemonics where you shave off vowels are a powerful, handy concept. [131087060140] |Even in natural languages (like Semitic languages) the concept exist (where root of words are universally and almost unambiguously identified by 3-consonant groups.) [131087060150] |It is an innate human mechanism for managing information. [131087070010] |Why $JAVA_HOME does not persist on a mac? [131087070020] |On my mac os 10.6.6 I'm trying to persist env variable $JAVA_HOME but it doesn't stick! [131087070030] |Once I restart it won't be set anymore. [131087070040] |The GUI way to do that is to use the Property List Editor as documented by Apple and on SO. [131087070050] |However, after a restart: [131087080010] |Running the 'export' command in a shell only persists it for the duration of the session. [131087080020] |Save the export command in ~/.bashrc (if your shell is bash). [131087080030] |This way it's executed every time you start a new shell session. [131087090010] |For more information on how to set JAVA_HOME in Mac OSX, there is an existing post http://stackoverflow.com/questions/603785/environment-variables-in-mac-os-x [131087100010] |As mentioned by others, export only applies to the current shell and programs started from it after it is used. [131087100020] |(Note that open relays its command to the Finder, so programs started that way don't get environment variables from the shell it's run in.) [131087100030] |One way to set environment variables persistently is to add to ~/.bash_profile or ~/.bashrc (the former is preferred, as otherwise subshells will override the export if you change it for some reason, say because you need a different JRE for some particular Java program). [131087100040] |Another is to set them in ~/.MacOSX/environment.plist; this is the only way to set environment variables so that the Finder will see them. [131087100050] |I prefer to use the Environment Variable Preference Pane to manage ~/.MacOSX/environment.plist. [131087100060] |You can also edit it by hand (watch out; it's XML). [131087100070] |You will have to log out or reboot to get Finder to reread it after changing it. [131087110010] |How can I tell if my system is keeping the system time up to date? [131087110020] |How can I tell if my Debian system is keeping the system time accurate by getting NTP updates? [131087110030] |Basically I want to turn this on if it is currently off, but I don't know if it is on or off. [131087120010] |Running [131087120020] |ps ax | grep ntpd [131087120030] |and checking that the output contains something like [131087120040] |will confirm that ntpd is running. [131087120050] |If it's not running then you can start it with [131087120060] |/etc/init.d/ntp start [131087120070] |If you get an error message No such file or directory then you will have to install the ntp package [131087120080] |sudo apt-get install ntp [131087120090] |Once you have ntpd running you can talk to it with the ntpq command. [131087120100] |Which shows (offset) that my system is <1 second out of sync - I can live with that. 
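The checks described above are easy to script; this is only a sketch and assumes ntpd and ntpq are installed:

    # is the daemon running?
    pgrep ntpd >/dev/null && echo "ntpd is running" || echo "ntpd is NOT running"
    # list the peers; the line starting with '*' is the current sync source
    ntpq -p
    # show the offset (in milliseconds) that the daemon reports for the system clock
    ntpq -c rv | tr ',' '\n' | grep offset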
[131087130010] |If you have peer statistics enabled in your /etc/ntp.conf then you have statistics in /var/log/ntpstats/peerstats. [131087130020] |(Directory and file name will be specified in ntp.conf). [131087130030] |You can scan it to see how well you are tracking your servers. [131087130040] |The command grep -v 127.127.1.0 /var/log/ntpstats/peerstats will output all the lines except those for your local clock. [131087130050] |The first floating point number is the offset in seconds. [131087130060] |The closer it is to zero the better. [131087130070] |There should be a mix of positive and negative values. [131087130080] |Use zgrep to look a historical data in the rotated logs with a .gz extension. [131087130090] |To see what the values are use ntpq -p as Iain suggested. [131087130100] |If you run Munin to monitor your system it can track you ntp statistcs for you. [131087130110] |I believe the offset it records is the value relative to the currently synchronization source. [131087130120] |That is the one on the line starting with an asterisk (*) in the ntpq -p output. [131087130130] |Munin can be configured to notify your offset is too large. [131087130140] |My warning lines are as follows (times in milliseconds): [131087140010] |A cheap and dirty way to check the local clock vs another machine is this shell command sequence: [131087140020] |"somehost" has to run the RFC 867 "daytime" protocol, and that's not so common anymore. inetd can provide "daytime" by itself, and some hosts still have "daytime" enabled. [131087140030] |You can at get an independent check on the local clock, no use of NTP necessary. [131087150010] |Sharing bandwidth between IPs [131087150020] |There are 200 users in my network, and these users are in two AD groups and cannot set specific IP ranges for each group. [131087150030] |I want to set, for each group, the amount of bandwidth that can be shared between users of groups. [131087150040] |I can manage bandwidth for a IP with squid, and also with iptables+tc, but I want to set bandwidth for groups of IPs that share between users of that. [131087150050] |How can I do that? [131087160010] |I can answer how to manage the bandwidth once you know what IP is in which group. [131087160020] |You can use hierarchical token bucket to allocate three groups. [131087160030] |
  • 10 Group A
  • [131087160040] |20 Group B
  • [131087160050] |30 Unknown traffic for/from non-AD devices in your network
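The setup and per-group scripts referred to in this answer were not preserved in this copy. Purely as an illustrative sketch (the interface name, rates and example IP are assumptions, not the original values), an HTB hierarchy of this shape could be created roughly like this:

    #!/bin/sh
    # Three HTB classes (1:10, 1:20, 1:30) on the interface facing the clients.
    # ceil lets a class borrow bandwidth that the others are not using.
    DEV=eth1

    tc qdisc add dev $DEV root handle 1: htb default 30
    tc class add dev $DEV parent 1: classid 1:1 htb rate 10mbit

    tc class add dev $DEV parent 1:1 classid 1:10 htb rate 4mbit ceil 10mbit   # Group A
    tc class add dev $DEV parent 1:1 classid 1:20 htb rate 4mbit ceil 10mbit   # Group B
    tc class add dev $DEV parent 1:1 classid 1:30 htb rate 2mbit ceil 10mbit   # unknown traffic

    # SFQ on each leaf, as suggested at the end of the answer
    tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
    tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
    tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10

    # What a per-group script might emit for one client IP:
    # steer traffic sent toward 192.168.1.50 into Group A
    tc filter add dev $DEV parent 1: protocol ip prio 1 u32 \
        match ip dst 192.168.1.50/32 flowid 1:10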
  • [131087160060] |In the script above the groups can lend bandwidth to other groups if they don't use it. [131087160070] |If you don't want this set ceil to the same value as rate. [131087160080] |Now you can write a script that dynamically sends IPs to class 1:10 or 1:20. [131087160090] |Probably you have to hook some dhcp- or ad-events. [131087160100] |The script for group A can look like this: [131087160110] |Remember you can only control what you send. [131087160120] |So if your router has interface eth0 and eth1, you have to manage bandwidth on eth1, too. [131087160130] |And consider to attach SFQ leaf QDISC to the classes. [131087160140] |SFQ is just great! [131087160150] |Mapping IPs to groups [131087160160] |Finding out what IP is in which group highly depends on the software you use. [131087160170] |If your software doesn't support events you might write a script that parses the log and decides to allocate a IP on a certain group. [131087170010] |What is the difference between Halt and Shutdown commands? [131087170020] |What is the difference between the Halt and Shutdown commands? [131087170030] |Thanks. [131087180010] |I suspect this is somewhat dependant on which version of UNIX/Linux you are using. [131087180020] |On Centos (and I expec other modern Linux) halt calls shutdown (providing you're not at runlevel 0 or 6) so your system will be shutdown cleanly. [131087180030] |On Solaris 10 halt is more brutal, it just flushes the disk caches and powers off the system - no attempt is made to run any scripts or shutdown smf facilities. [131087190010] |In linux, "halt" and "reboot" are aliases of the shutdown command -- shutdown -h and shutdown -r respectively. [131087190020] |Bareword shutdown generally assumes -h. [131087200010] |Generally, one uses the shutdown command. [131087200020] |It allows a time delay and warning message before shutdown or reboot, which is important for system administration of multiuser shell servers; it can provide the users with advance notice of the downtime. [131087200030] |As such, the shutdown command has to be used like this to halt/switch off the computer immediately (on Linux and FreeBSD at least): [131087200040] |Or to reboot it with a custom, 30 minute advance warning: [131087200050] |After the delay, shutdown tells init to change to runlevel 0 (halt) or 6 (reboot). [131087200060] |(Note that omitting -h or -r will cause the system to go into single-user mode (runlevel 1), which kills most system processes but does not actually halt the system; it still allows the administrator to remain logged in as root.) [131087200070] |Once system processes have been killed and filesystems have been unmounted, the system halts/powers off or reboots automatically. [131087200080] |This is done using the halt or reboot command, which syncs changes to disks and then performs the actual halt/power off or reboot. [131087200090] |On Linux, if halt or reboot is run when the system has not already started the shutdown process, it will invoke the shutdown command automatically rather than directly performing its intended action. [131087200100] |However, on systems such as FreeBSD, these commands first log the action in wtmp and then will immediately perform the halt/reboot themselves, without first killing processes or unmounting filesystems. [131087210010] |Keyboard behaving strangely on Debian - Dualboot on Macbook [131087210020] |Yesterday I installed Debian 6 on my Macbook, with dualboot. [131087210030] |Everything is working fine, except for the keyboard. 
[131087210040] |As I'm typing, I see the mouse arrow moving a bit and strange things happen, such as text under the arrow being highlighted or clicked. [131087210050] |Other things as Right-Click, selecting text and other mouse-related events also happen. [131087210060] |It's being really hard to type like this. [131087210070] |Anyone have any ideas of what might be the cause and how I can fix this? [131087210080] |Thanks. [131087220010] |Some sort of X.org memory leak using nvidia proprietary driver [131087220020] |Hey, [131087220030] |could please anybody help me identify what is the source of problem ? the problem is that after some time or after opening a window of some application in X, it seems like I suddenly lost 2d acceleration. [131087220040] |Because the process /usr/bin/X and all its operations ( window resizing, scrolling) suddenly eats 60% to 150% of CPUs runtime when scrolling window for instance. [131087220050] |I'm using twinview, but it happens even without it, but not so much. [131087220060] |I'm not using any composition in KDE or xfce [131087220070] |the nvidia driver installation logs looks there was no problem during the installation [131087220080] |I tried various different kernel versions, 2 different Xorg version, 2 versions of nvidia driver and even different window managers [131087220090] |it only happens in particular time frames, if I open some specific www page or application in browser or it comes from nowhere. [131087220100] |It is possible that it could be triggered by some flash animation, gif animation, javascript scrolling text or other effects in browser, but it is really hard to say, because it usually appears suddenly (when reading something in browser for instance) [131087220110] |as far as xorg.conf, I have the default one from nvidia-settings. [131087220120] |I also tried various possible configurations and modules. [131087220130] |I have glxgears 2.241 FPS and after the problem is gone I have 6267.623 FPS [131087230010] |Is there a utility that interprets /proc/interrupts data in time ? [131087230020] |Hey, [131087230030] |is there something out there ? [131087230040] |Like top is for ps [131087250010] |the slash (/) after a directory name on shell commands [131087250020] |Hello, I have a little question here. [131087250030] |If I have two files, say filea and fileb, mv filea fileb would [131087250040] |
  • delete fileb
  • [131087250050] |rename filea to fileb
[131087250060] |Then if I have two directories, say dira and dirb, mv dira dirb would [131087250070] |
  • move dira into dirb (it will become dirb/dira)
[131087250080] |Noting that in both cases there is no notice or message, this is pretty inconsistent to me. [131087250090] |I think mv dira dirb should just overwrite dirb with the contents of dira (or merge the two directories under a directory named dirb). [131087250100] |I remember reading somewhere that a directory name with a slash (like dira/) is treated like a directory, and a name with no slash (like dira) is treated like a file (to certain extents, of course). [131087250110] |Anyway, I now want to make the shell (zsh and possibly bash) respect my convention of marking a directory with a trailing slash. [131087250120] |Is there a shell option that enables me to enforce that? [131087250130] |To clarify, here is my desired behaviour: [131087250140] |
  • mv dira dirb results in dirb being overwritten with the contents of dira
  • [131087250150] |mv dira dirb/ results in dira being moved into dirb (in dirb/dira)
  • [131087250160] |Has anyone thought the same way as me? [131087250170] |Or am I just weird? [131087260010] |Yes, this is a bit inconsistent, even within the GNU tools. [131087260020] |One problem with your proposal is that non-empty directories cannot be removed. [131087260030] |Apart from that, -T (in GNU land, anyway) approximates what you want (the first case): [131087270010] |If you just want to move everything from dira to dirb you can run mv dira/* dirb. [131087270020] |As per mv man page on my HP-UX. [131087270030] |The mv command moves: [131087270040] |+ One or more directory subtrees (directory1, [directory2, ...]) to a new or existing directory (dest-directory). [131087280010] |How to start an xterm from Nautilus, running in the folder I'm viewing? [131087280020] |This was easy in Thunar, but now I'm on a machine where I'm trying to do everything in the Gnome desktop world. [131087280030] |In Nautilus, I'm in some deep-down folder. [131087280040] |How do I open an xterm (rxvt, aterm, gnome's terminal app, whichever) with its working directory already set to the location Nautilus is showing? [131087290010] |There is a very useful Nautilus extension called nautilus-open-terminal that does just what you asked. [131087290020] |You should find it in the standard repositories. [131087290030] |Once installed you should have a "Open in terminal" entry in the File menu. [131087300010] |This tag is about windows as a GUI element. [131087300020] |Use windows for questions about interoperability with the Windows operating system. [131087300030] |See also desktop-environment, window-manager, x11, xorg. [131087310010] |The window is a fundamental concept in graphical user interfaces [131087320010] |Many unix variants provide more fine-grained file permissions than the traditional, standard triple of permissions for the owning user, the owning group and others. [131087320020] |These permissions are expressed through access control lists. [131087320030] |The permissions that can be expressed through ACLs vary between unix variants. [131087320040] |

    Related tags
  • [131087320050] |permissions
  • [131087320060] |security
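As a small, assumed example of what such ACLs look like in practice on Linux (the file and user names are placeholders, and the acl userland tools must be installed):

    # Grant user "alice" read access beyond the normal owner/group/other bits
    setfacl -m u:alice:r report.txt

    # Inspect the resulting access control list
    getfacl report.txt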
  • [131087330010] |ACL stands for access control list. [131087330020] |ACLs extend permissions on files beyond the traditional user-group-others triple [131087340010] |What's the difference between SFTP, SCP and FISH protocols? [131087340020] |I've used to think SCP is a tool to copy files over SSH and copying files over SSH is called SFTP which is itself a synonym to FISH. [131087340030] |But now as I was looking for a Total Commander plugin to do this in Windows, I've noticed that on its page its said "Alows to access remote servers via secure FTP (FTP via SSH). [131087340040] |Requires SSH2. [131087340050] |This is NOT the same as SCP!". [131087340060] |So, if its not, then what am I missunderstanding? [131087350010] |SFTP isn't the FTP protocol over ssh, but an extension to the SSH protocol included in SSH2 (and some SSH1 implementations). [131087350020] |SFTP is a file transfer protocol similar to FTP but uses the SSH protocol as the network protocol (and benefits from leaving SSH to handle the authentication and encryption). [131087350030] |SCP is only for transferring files, and can't do other things like list remote directories or removing files, which SFTP does do. [131087350040] |FISH appears to be yet another protocol that can use either SSH or RSH to transfer files. [131087360010] |Put it simple: [131087370010] |debian security /etc permissions [131087370020] |I'm setting up a debian box with shared webhhosts. [131087370030] |These users don't have ssh permissions, just ftp. [131087370040] |The users are allowed to use PHP and I setup suphp for that so the php processes runs under their own user account, etc. [131087370050] |I'm a little bit worried about the security of the system files, especially the /etc folder. [131087370060] |I notice that most files in this directory have permissions like: [131087370070] |Are the read-world permissions which debian standard gives the files in /etc really needed? [131087370080] |What's the best mask I can give those files? [131087370090] |Are there any files in /etc that should be world readable? [131087380010] |The default permissions are fine, and needed. [131087380020] |If you e.g. didn't leave passwd world readable, a lot of user-related functionality would stop working. [131087380030] |File such as /etc/shadow shouldn't be (and aren't) world readable. [131087380040] |Trust the OS to get this right, unless you know very well that the OS is wrong. [131087390010] |The passwd needs to be world readable so that a few tools can work correctly. [131087390020] |Despite its name, the passwords are not stored there, they are stored in the /etc/shadow file which should have the permissions -rw-------. [131087390030] |The passwd- file is likely a backup. [131087390040] |All other "files" are directories and contain configuration files. [131087400010] |Nearly all the configuration file needs to be world readable, how do you expect your applications to read them otherwise ? [131087400020] |If you're really that paranoid, you can however create a groups for each application, put the needed users in them and change group owner and permission for the related configurations file. [131087400030] |But I think this would cause a lot more harm than good. [131087400040] |The only important file I can think of which don't have world readable permission is /etc/shadow like stated in other comments. [131087400050] |If you want a secure Debian box, I suggest the securing Debian Howto it's a little bit old, but it gives a good overview. 
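If you want to see which files under /etc already deviate from the world-readable default, a quick check along these lines can help (a sketch using GNU find; adjust the path and depth to taste):

    # Regular files directly under /etc that are NOT world-readable;
    # on a stock Debian install this is mostly shadow, gshadow and sudoers
    find /etc -maxdepth 1 -type f ! -perm -o=r -ls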
[131087400060] |There is also the harden package which create some interesting dependencies and forbid installation of known vulnerable packages. [131087410010] |Let's take a step back: If those users only need access to their home directories, most FTP servers have some config setting that only allows access to that directory, and nowhere else (most commonly by using chroot). [131087410020] |For example, in ProFTPd, it's the DefaultRoot directive: [131087410030] |http://www.proftpd.org/docs/faq/linked/faq-ch5.html#AEN524 [131087420010] |Everything seams fine except the phpmyadmin directory. [131087420020] |Be really careful to protect files so the mysql password do not leak. [131087430010] |sed or tr one-liner to delete all numeric digits [131087430020] |So, I have this textfile, and it consists of mostly alphanumeric characters. [131087430030] |It's a standard document. [131087430040] |But since I copied it and pasted it from a PDF, there are page numbers in there. [131087430050] |I don't much care for the occasional number that's not a page, so I figure I'll wipe them all out with sed or tr. [131087430060] |Just marginally faster than find and replacing first zero, then one, then two, etc. in the GUI, after all. [131087430070] |So how do I do that? [131087440010] |I believe what you are looking for is: [131087450010] |To remove all digits, here are a few possibilities: [131087450020] |If you just want to get rid of the page numbers, there's probably a better regexp you can use, to recognize just those digits that are page numbers. [131087450030] |For example, if the page numbers are always alone on a line except for whitespace, the following command will delete just the lines containing nothing but a number surrounded by whitespace: [131087450040] |You don't need to use the command line for this, though. [131087450050] |Any halfway decent editor has regexp search and replacement capabilities. [131087460010] |How can I get wireless and ethernet to work with ubuntu? [131087460020] |I have an acer aspire 7551 laptop running windows 7 and I've successfully used wubi to also install ubuntu (the latest release version). [131087460030] |Wireless works fine on windows 7, but neither wireless nor ethernet works on ubuntu. [131087460040] |In particular, the wireless icon shows up at the top-right (but no available connections show up when I click the icon). [131087460050] |When I connect my ethernet cable (which works fine when I plug it into my PC) to my laptop when it's running ubuntu and click the wireless icon, auto-etho shows up as an option but when I click it, it works for a few seconds and then disconnects. [131087460060] |(Note - ethernet also doesn't work in windows 7 even though all the latest drivers are installed) [131087460070] |How can I get wireless and ethernet to work under ubuntu? [131087460080] |Additional Info [131087460090] |Ubuntu version: "Ubuntu 10.0 - the Maverick Meerkat" [131087460100] |lspci output: [131087460110] |ifconfig -a output: [131087470010] |I have the same network hardware in my laptop running Ubuntu 10.10 Maverick. [131087470020] |For the wireless adapter, you need the binary Broadcom STA proprietary drivers. [131087470030] |Ubuntu should prompt you to install them when you first start, but if you're lacking a network connection, that might be why it's not working. [131087470040] |Fortunately, the stuff you need is on the 10.10 installation disk. 
[131087470050] |Here are the simplest gui steps: [131087470060] |1) insert the disk, and navigate to it in the file browser (nautilus) [131087470070] |2) navigate into the folder called pool, and then go into main, and then d. Install dkms_2.1.1.2-3ubuntu1_all.deb from the dmks folder, by double-clicking on it. [131087470080] |3) install /pool/main/p/patch/patch_2.6-2ubuntu1_amd64.deb by the same process [131087470090] |4) install /pool/main/f/fakeroot/fakeroot_1.14.4-1ubuntu1_amd64.deb [131087470100] |5) finally, install /pool/restricted/b/bcmwl/bcmwl-kernel-source_5.60.48.36+bdcom-0ubuntu5_amd64.deb [131087470110] |if you restart, you should (fingers crossed!) be okay now. [131087470120] |The wired ethernet not working is odd - never seen that. [131087470130] |If it doesn't work under Windows either, I'd suggest a hardware problem is likely there. [131087470140] |edit: the deb filenames above are for the 64bit version. [131087470150] |For the i386 ones, just replace _amd64 with _i386. [131087470160] |You'll find the files you're looking for :) [131087480010] |Printing conjunct unicode characters using single keystroke [131087480020] |I want to print a conjunct unicode characters (which do not have dedicated unicode value assigned to it, but which can be print using the combination of unicode characters) using a single key stroke by modifying the keyboard layout in Linux. [131087480030] |I am modifying the /usr/share/X11/xkb/symbol/in file to modify the layout. [131087480040] |Let me know if anything is possible. [131087490010] |I'm not sure of what do you want to do, but, if it may help I use xim and a custom $HOME/.XCompose (on a per user configuration basis) to remap custom key (two chars sequence mapped to a unique key, composition rules for dead key). [131087500010] |no network device found after Kernel update [131087500020] |So I managed to install Debian 5 on an eBox 3300MX (an embedded type computer). [131087500030] |I added a custom kernel after getting a basic install done. [131087500040] |That was to get the network card to work. [131087500050] |After that, I finished with installing, since I was doing a net install, and it worked fine. [131087500060] |However, since then, I am unable to get the Debian to use the network card again. [131087500070] |The only thing I can think of is that the Kernel updated during the install. [131087500080] |I tried booting into the previous working kernel, and that did not work either. [131087500090] |I get the error "No network devices found." [131087500100] |However, I can open Network Tools and view devices, and I see the network card, with its MAC address and other info. [131087500110] |I have it listed in etc/network/interfaces as auto. [131087500120] |Thus, I don't understand why it does not want to work. [131087500130] |I'm not really sure what other info is relevant, so please comment and let me know. [131087510010] |Try out this guide from the debian forum. [131087510020] |Looks like the driver from the manufacturer held low quality and was therefor thrown out at some point. [131087520010] |It turns out that during the first update I did (apt-get install ntp), that ifupdown was removed. [131087520020] |Since I did not immediately reboot, I was able to continue with the net install following this. [131087520030] |I discovered this by completely installing Debian again, and carefully looking through the operations during that update (well, and the help of someone who knows much more about Linux than I do). 
[131087520040] |Anyways, all it took to get going again was apt-get install ifupdown and now everything is good. [131087530010] |Monitor Blinking [131087530020] |I was a Debian user, but for some reason, after I installed Arch Linux, my monitor started to blink. [131087530030] |It happens even on text terminals (C-F1 etc), but only after gdm starts. [131087530040] |Any idea on what could be happening and how to fix it? [131087530050] |I installed very few programs: vim, xorg, xf86-video-ati, gdm, xorg, fluxbox, vim, sudo and chromium. [131087530060] |I installed xf86-video-ati based on the output of lspci: [131087540010] |What is SSH - the protocol and what is ssh - the utility? [131087540020] |What is SSH - the protocol? [131087540030] |What is ssh - the unix utility and how does it work? [131087540040] |How is SSH protocol related to SFTP? [131087540050] |What is sshd? [131087540060] |Does the command su use ssh or sshd? [131087550010] |Take a look at the OpenSSH project. [131087550020] |It has all the info you're looking for. [131087550030] |Briefly, the SSH protocol permits the secure (encrypted) connection between two hosts. [131087550040] |The ssh utility is a client program to log into a remote system using the SSH protocol, and it has a lot of other uses, too, like [reverse] tunneling/port forwarding/... [131087550050] |sshd it's the server software. [131087550060] |It provides a daemon which responds to incoming SSH requests. [131087550070] |su has nothing to do with ssh. [131087550080] |It's used to change the active user (the most frequent use it's to become root). [131087560010] |SSH (stands for "Secure SHell") is a network protocol which described in RFC4251. ssh utility is SSH client that connects to SSH daemon and presents "Secure SHell" to user. [131087560020] |SFTP is FTP-like protocol which works over SSH connection. [131087560030] |su command does not use ssh or sshd in any way, it just allows you to run processes with different privileges. [131087570010] |SSH is a protocol for secure communication over an insecure network. [131087570020] |It allows for end to end encryption of all communication such that it cannot (feasibly) be intercepted and decrytped. [131087570030] |ssh the utility is an implementation of the protocol. [131087570040] |SFTP is a subsystem of ssh that uses the protocol for secure password and file transfer. [131087570050] |su does not use the ssh protocol. [131087580010] |The SSH protocol is defined by what the ssh and sshd programs accept. [131087580020] |(There is a standard defined for it, but it's an after-the-fact thing and is mostly ignored when one of the implementations adds new features.) [131087580030] |Since there are multiple implementations of those (OpenSSH, F-Secure, PuTTY, etc.) occasionally you'll find that one of them doesn't support the same protocol as the others. [131087580040] |Basically, it defines authentication negotiation and creation of a multiplexed data stream. [131087580050] |This stream can carry one or more (with OpenSSH and ControlMaster) terminal sessions and zero or more tunnels (forwarding socket connections from either local or remote to the other side; X11 forwarding is a special case of remote forwarding). [131087580060] |It also defines "subsystems" that can be used over the stream; terminal sessions are the basic subsystem but others can be defined. sftp is one of these. [131087580070] |ssh the utility uses the SSH protocol to talk to sshd on another machine. 
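A few assumed, everyday invocations of the client utilities being described (host and user names are placeholders):

    # Interactive login
    ssh alice@server.example.com

    # Run a single remote command
    ssh alice@server.example.com uptime

    # Forward local port 8080 to port 80 on the remote machine (a simple tunnel)
    ssh -L 8080:localhost:80 alice@server.example.com

    # File transfer over the SFTP subsystem
    sftp alice@server.example.com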
[131087580080] |How it works depends on what version it is (see above), but the gist of it is that it attempts to figure out which version of the SSH protocol to use, then it and sshd negotiate supported authentication methods, then it tries to authenticate you using one of those methods (asking for remote user password/private key paasword/S-Key phrase as necessary), and on successful authentication sets up a multiplexed stream with the sshd. [131087580090] |sshd, as said above, implements the server side of the SSH protocol. [131087580100] |sftp is a (at present, the only standard) subsystem defined in most sshd implementations. [131087580110] |When the SFTP subsystem is requested, sshd connects sftp-server to the subsystem session; the sftp program then talks to it, similarly to ftp but with file transfers multiplexed on the stream instead of using separate connections as with ftp. [131087580120] |su has nothing to do with ssh, sshd, or sftp, except insofar as there may be PAM modules to arrange for the multiplexed stream to be available within the shell or program run by it. [131087590010] |rsyslog is not discarding message as it should [131087590020] |I have setup rsyslog to write messages from local0.* to a seperate logfile, and then discard the messages. [131087590030] |For some reason, the logs are going to both /var/log/syslog, and the new logfile. [131087590040] |I have put the config in rsyslog.d/30-local0.conf, which as far as i know, should come before the 50-default.conf, and so the message should be discarded before hitting the standard rules and being written to syslog? [131087600010] |Sounds correct so far, we may need more data (your actual config files). [131087600020] |Just to be sure, you should have this in 30-local0.conf: [131087610010] |Bash regex matching not working in 4.1 [131087610020] |Upgraded to Bash4 and found that it is not matching regexes: [131087610030] |But Bash 3.0 is: [131087610040] |Why might this be? [131087610050] |Have I not installed it correctly? [131087620010] |Check this answer on SO. [131087620020] |Since you are using 3.00 version of bash 3, it might regard your problem. [131087620030] |Shortly, starting from 3.2 version, quoting the string argument to the [[ command's =~ operator forces string matching, so the correct pattern for bash 4 should be: [131087630010] |Getting all files that have been modified on a specific date [131087630020] |Is it possible to find all php files within a certain directory that have been modified on a certain date [131087630030] |I'm using [131087630040] |to get files modified within the last 28 days, but I only need files that have been modified on the following date 2011-02-08 [131087640010] |On recent versions of find (e.g. GNU 4.4.0) you can use the -newermt option. [131087640020] |For example, to find all files that have been modified on the 2011-02-08 [131087640030] |Also note that you don't need to pipe into grep to find php files because find can do that for you in the -name option. [131087640040] |Take a look at this SO answer for more suggestions: How to use 'find' to search for files created on a specific date? [131087650010] |You have almost the right command already, for versions of find that won't let you use dates: [131087650020] |In general, for find -n means fewer than, n means equal, +n means "more than". [131087650030] |Traditional find has some exceptions, but GNU find and other newer versions such as on BSD/Mac OS X corrected those. 
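The command itself did not survive in the answer above; with GNU find it would look roughly like this (a sketch, not the author's exact line):

    # PHP files modified on 2011-02-08 (on or after that date, but before the next day)
    find . -name '*.php' -newermt 2011-02-08 ! -newermt 2011-02-09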
[131087650040] |(Keep this in mind if you ever find yourself on a Solaris or other commercial Unix system.) [131087660010] |Annoyingly, there isn't any direct way with standard find. [131087660020] |Recent versions of find on GNU systems (e.g. non-embedded Linux, Cygwin) and some *BSDs have options such as -newermt to compare a file date with a spelled-out date. [131087660030] |With standard find, all you can do is compare the file date with the current date (-mtime) or with a fixed file. [131087660040] |The current date is usually not useful in this case (it counts back from the time you run the find command, whereas most applications require a calendar date). [131087660050] |That leaves you with the kludge of creating temporary files to define a range. [131087670010] |Best practice to use $? in bash? [131087670020] |Hi [131087670030] |When I read this answer about $? another question comes to mind. [131087670040] |Is there any best practice for how to use $? in bash? [131087670050] |Let's have a example: [131087670060] |We have a linear script and I we would like to know that all the command was executed ok. [131087670070] |Do you think it is ok to call a small function (let's call it "did_it_work"), to check the error code and break if it's not. [131087670080] |This approach of course means that I have to manually solve the problem if there is any and rerun the script. [131087670090] |Do you think this is a good idea or is there some other best practice to do this? [131087670100] |/Thanks [131087680010] |You could rewrite your code like this: [131087680020] |If you don't actually need to log the error code, but just whether the command succeeded or not, you can shorten try() further like so: [131087690010] |If you really want to exit on an error and are using Bash, then you should also consider set -e. [131087690020] |From help set: [131087690030] |-e Exit immediately if a command exits with a non-zero status. [131087690040] |This of course doesn't give you the flexibility of a did_it_work() function, but it is an easy way to make sure your bash script stops on an error without adding lots of calls to your new function. [131087700010] |One common way is: [131087700020] |then you use it like this: [131087700030] |Or if you want it to include the exit status, you could change it to: [131087700040] |and then using it is a bit easier: [131087700050] |Just in case you haven't seen command1 || command2 before, it runs command1, and if command1 fails, it runs command2. [131087700060] |So you can read it like "make the directory or die". [131087700070] |Your example would look like: [131087700080] |Or you can align the dies further on the right so that the main code is more obvious. [131087700090] |Also, if you are going to use the name some/path multiple times, store it in a variable so you don't have to keep typing it, and can easily change it if you need to. [131087700100] |And if you plan to fix the problem and re-run the script, maybe you want the script to work if the directory already exists, so you don't have to remove it first, in which case, you would want [131087710010] |Looking at logs from bottom upwards [131087710020] |Is there a linux command or some way to look at logs from bottom up rather than from top towards bottom. [131087710030] |I know about tail -n , but is there something that I can actually scroll and go from bottom up? [131087720010] |The tac command is like a reverse "cat", if that's what you're after. 
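For example (the log file name is only an illustration):

    # Print the file last line first and page through it
    tac /var/log/syslog | less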
[131087720020] |If you just want to view a file by starting at the bottom, why don't you use an editor or "less"? [131087730010] |Some systems have tac, which is a whimsically-named backward cat. [131087730020] |Without that, you can still do something like [131087740010] |I think less +G is what you're looking for — it'll load up the file at the end, and you can use the up arrow (or k) to scroll back. [131087740020] |Within the viewer, g will take you to the top of the file and G will take you back to the end. [131087740030] |And F will scroll to the end and try to keep reading, like tail -f. [131087750010] |If you are writing a nagios check the perl module File::ReadBackwards is useful [131087760010] |You can run less and then use M-> (that's the meta key, usually alt, and the '>' at the same time) to go to the bottom of the file. less supports scrolling. [131087770010] |Lexmark S305 scanner / printer [131087770020] |I have bought a Lexmar Impact S305 scanner / printer. [131087770030] |There was the small penguin and the word "Linux" among supported systems on the box. [131087770040] |The problem is the official drivers are only for Debian based and RPM based distros. [131087770050] |I haven't found unofficial drivers. [131087770060] |There is graphic installer. [131087770070] |It fails win my distro (Arch Linux), however I've installed it on virtual machine with Mint Debian and the printer works there. [131087770080] |I've extracted some files (so, ppd, bin) from installer too. [131087770090] |My question. [131087770100] |What do I need to set up my printer? [131087770110] |It looks ppd alone is not enough. [131087770120] |Update 08-03-2011 [131087770130] |I've extracted scripts from deb file. [131087770140] |There are 3 files: control, postinst (17k) and prerm (4,2k). [131087770150] |I enter a new shell. [131087770160] |And it is the end. [131087770170] |Update 13-03-2011 [131087770180] |The content of line 70-82: [131087780010] |Have a look inside the DEB file(s), and navigate into DEBIAN directory. [131087780020] |There you'll find out what dpkg would do when installing the package, and try to replicate those steps manually. [131087780030] |They are shell scripts. [131087790010] |It would help if you posted all the scripts involved, but I'll hazard a guess. [131087790020] |Those are bash scripts, but they are run by /bin/sh, which is dash and not bash on your system. [131087790030] |Change any #!/bin/sh line at the top of the scripts to #!/bin/bash, and change the explicit invocations of /bin/sh into /bin/bash as well. [131087790040] |The immediate source of the error on line 73 is that $username is not set, so the [ command sees the operands == and root (plus the final ]). [131087790050] |This is a syntax error. [131087790060] |It's impossible to know why the variable isn't set without seeing more of the script. [131087790070] |(Beware that the small extract from the scripts you've included in your post shows that the author doesn't have a lot of experience writing unix shell scripts. [131087790080] |From what I've seen elsewhere, this often applies to the rest of the driver. [131087790090] |Open-source drivers shipped in Linux distributions tend to be much better quality than manufacturer-provided drivers. [131087790100] |Unfortunately, it looks like you have no choice with this model.)
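To make the failure mode concrete, here is a sketch of the kind of test being described and the usual fixes; the variable name comes from the answer above, everything else is illustrative:

    #!/bin/bash
    # With $username unset, the unquoted expansion leaves the [ builtin with the
    # operands '==' and 'root', which is the syntax error reported on line 73.
    if [ $username == "root" ]; then echo "running as root"; fi

    # Safer equivalents that also work under dash (/bin/sh on Debian):
    if [ "$username" = "root" ]; then echo "running as root"; fi
    if [ "${username:-}" = "root" ]; then echo "running as root"; fi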