[131079770010] |passwordless ssh for another username? [131079770020] |I want to do a passwordless ssh connection for a subversion project. [131079770030] |Right now I'm using ssh+svn, and it's a little annoying, because I have to type the password any time I want to transact with the server. [131079770040] |I've found several tutorials on the web about how to generate a key for passwordless ssh, but they all seem to assume that I'm using the same username on the remote system as I am for my home system. [131079770050] |However, the username that I use for ssh+svn is different that the user account name on the system that I'm running. [131079770060] |How do I set this up properly? [131079770070] |I haven't had luck with just changing the name in the key file. [131079780010] |You just have to supply the other system's username in the svn command: [131079780020] |To answer your question's title, too: [131079780030] |This causes sshd on the remote machine to look in ~otheruser/.ssh/authorized_keys for the public key corresponding to the private key on the machine you're typing the command on. [131079790010] |There are two ways to do this: [131079790020] |1) put user@ into the svn url ; this tells svn+ssh to login as that user. [131079790030] |I think it's kind of a bad idea from a maintenance perspective because things like externals that point at other parts of the repository won't work correctly. [131079790040] |2) make a ~/.ssh/config (documented as ssh_config) that says something like: [131079790050] |this way any attempt to ssh to othersystem will default to using otheruser. [131079790060] |Which is handy for you when do ssh manually as well as when you're using svn. [131079800010] |You don't have to have the same username on both mashines. [131079800020] |As long as you generate the key (ssh-keygen) you have to copy line from ~/.ssh/id_rsa.pub or ~/.ssh/id_dsa.pub (depending on type of key) from local server and append it to ~/.ssh/authorized_keys on remote. [131079800030] |If you don't want to type remoteuser each time append to ~/.ssh/config: [131079800040] |PS. [131079800050] |The name of key may be in form of localuser@localhost but it is only a name. [131079800060] |It can be just as well myfavouritekey@myfavouritecomputer and noone would care. [131079810010] |Reserved space for root on a filesystem - why? [131079810020] |Hi, [131079810030] |I understand that by default, newly created filesystems will be created with 5% of the space allocated for root. [131079810040] |I also know you can change the defined space with: [131079810050] |What I'm curious about though, is what the actual purpose for this reserved space is. Does it serve any practical purpose which would merit more than 5% space in some circumstances? [131079810060] |The reason I've stumbled upon this question is that we recently built a 1TB filestore, and couldn't quite figure out why a df -h left us missing 5% of our capacity :) [131079810070] |Much appreciated, Foxed [131079820010] |If you allow others to log on to your system, via ssh, for example, having these 5% blocks reserved ensures external users cannot fill the disk. [131079820020] |Even if you don't allow others to log in to your system, the reserved blocks prevents programs not running as root from filling your disk. 
[131079830010] |With the blocks reserved your users, and services that are running as specific users rather than running as root, can not fill a filesystsem and potentially break other things that need to write to said filesystem - though services running as root still can still make it completely full of course. [131079830020] |It also give you some space to work with when users complain that the disk is full, or services start failing because the filesystem is full. [131079830030] |For instance you could archive some files off into zip/gz/7zip archives before removing them (though if the filesystem were completely full, chances are you have some other filesystem available that you could create the archive file in instead). [131079830040] |5% has been the default for a long time, from back when disks were far smaller (tens of megabytes rather then hundreds of gigabytes) so 5% wasn't all that much. [131079830050] |Luckily it can easily be tuned down to a smaller percentage as you say, or set to a specific number of blocks if you use tune2fs's -r option instead of -m. [131079830060] |In both cases you can give a parameter of 0 to turn the reservation off completely - I wouldn't do this for /, /tmp, /var and so forth, but you might want to for a filesystem that only acts as user storage (say a global file-share) or one that just holds fixed size files (like fixed sized VMs) that will only grow when you create a new one. [131079840010] |Saving space for important root processes (and possible rescue actions) is one reason. [131079840020] |But there's another. [131079840030] |Ext3 is pretty good at avoiding filesystem fragmentation, but once you get above about 95% full, that behavior falls off the cliff, and suddenly filesystem performance becomes a mess. [131079840040] |So leaving 5% reserved gives you a buffer against this. [131079840050] |Ext4 should be better at this, as explained by Linux filesystem developer/guru Theodore Ts'o: [131079840060] |If you set the reserved block count to zero, it won't affect performance much except if you run for long periods of time (with lots of file creates and deletes) while the filesystem is almost full (i.e., say above 95%), at which point you'll be subject to fragmentation problems. [131079840070] |Ext4's multi-block allocator is much more fragmentation resistant, because it tries much harder to find contiguous blocks, so even if you don't enable the other ext4 features, you'll see better results simply mounting an ext3 filesystem using ext4 before the filesystem gets completely full. [131079840080] |If you are just using the filesystem for long-term archive, where files aren't changing very often (i.e., a huge mp3 or video store), it obviously won't matter. [131079850010] |The default of 5% is meant for system partitions. [131079850020] |For example, if your disk space fills up, the system logs (/var/log) and root's mailbox (/var/mail/root) can still receive important information. [131079850030] |For a /home or general data storage partition, there's no need to leave any space for root. [131079850040] |For special needs, you can change the user that gets the emergency space. [131079850050] |There's another reason to not allow an ext[234] filesystem to get full, which is fragmentation. 
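For reference, a minimal sketch of inspecting and shrinking the reservation with tune2fs (the device name /dev/sdb1 is only an example; substitute your own filesystem):

    # show the current reserved block count and percentage
    tune2fs -l /dev/sdb1 | grep -i 'reserved block'
    # reduce the reservation to 1% of the filesystem
    tune2fs -m 1 /dev/sdb1
    # or reserve an absolute number of blocks instead of a percentage
    tune2fs -r 25000 /dev/sdb1

Setting -m 0 removes the reservation entirely, which, as noted above, is usually only sensible on pure data filesystems.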
[131079860010] |Escaping quotes in zsh alias [131079860020] |Following on from this question about stripping newlines out of text, I want to turn this into a zsh alias as follows: [131079860030] |I've tried escaping the quotes inside the awk script, but I'm getting this error: [131079860040] |(The file is called bspsrobustness) [131079860050] |Is there a way to do what I want? [131079860060] |I suppose I could turn this into an awk script rather than a zsh alias, is that my best option? [131079870010] |You could use a zsh function instead of an alias. [131079870020] |No quoting hoops to jump through. [131079880010] |To get an idea of what's going on, run [131079880020] |Since the $ characters are in double quotes (when they're expanded after the alias is expanded), they are interpreted by the shell. [131079880030] |To get the quoting right, it's easier to put the whole alias definition in single quotes. [131079880040] |What's inside the single quotes is what will be expanded when the alias is used. [131079880050] |Now that the argument of awk is surrounded in double quotes, it's clear that you need backslashes before \"$. [131079880060] |A useful idiom to single-quote a single-quoted string is that '\'' is pretty much a way to put a literal single quote in a single-quoted string. [131079880070] |Technically there's a juxtaposition of a single-quoted string, a backslash-quoted ', and another single-quoted string. [131079880080] |The juxtaposed empty string '' at the end can be removed. [131079880090] |After this long explanation, a recommendation: when it's too complicated for an alias, use a function. [131079890010] |Background task finished notification syntax [131079890020] |Last night, before abandoning my computer for the evening, I started a bunch of compiler jobs so they'd be ready in the morning, using make -f alpha.mak &>alpha.out &. [131079890030] |When I came back and hit return, I saw the following output: [131079890040] |My question: What do the + and - symbols mean in that output? [131079890050] |I'm using bash on RedHat 6. [131079900010] |According to the Bash Reference Manual: Job Control: [131079900020] |In output pertaining to jobs (e.g., the output of the jobs command), the current job is always flagged with a +', and the previous job with a-'. [131079910010] |Video Watermarking [131079910020] |I am looking for a solution in which I can add watermark to the given video at given frame. [131079910030] |Is there any utility/tool to do so ? [131079920010] |mencoder can definitely do it, although the folks I know tend to use one of the free Windows apps (so I'm guessing you could run one of them under Wine) [131079920020] |Also bmovl should be able to do it. [131079920030] |Check out this thread. [131079930010] |If you want to do this programatically mencoder is the tool for the job. [131079930020] |You'll need a static image (or generate one programatialy with ImageMagic, gd, etc) and some time to play with filters. [131079940010] |How to change kernel boot options in GRUB 2 [131079940020] |In old GRUB, if I wanted to change boot options, I would use a line starting with the word kernel in "/boot/grub/menu.lst": [131079940030] |Now since I'm using GRUB 2, I looked under "/etc/grub.d/10_Linux" for something similar, but I'm not even sure that's the right place to do this. [131079940040] |If it is, please guide me. [131079940050] |If it's not, guide me anyways. [131079950010] |Edit the file /etc/default/grub and set your parameter in GRUB_CMDLINE_LINUX. 
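A minimal sketch of what such an edit could look like (the kernel parameter shown is only a placeholder, not a recommendation):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    GRUB_CMDLINE_LINUX="elevator=deadline"

    # then regenerate the GRUB 2 configuration so the change takes effect
    sudo update-grub                          # Debian/Ubuntu wrapper
    # or: grub-mkconfig -o /boot/grub/grub.cfg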
[131079950020] |For example I have: [131079950030] |I learned this from Ubuntu documentation. [131079950040] |Other sources are either too hard to find, or I didn't dig deep enough. [131079960010] |Origin of 'root' account [131079960020] |What's the origin of root account? [131079960030] |Where did it come from and why is it called root anyway? [131079960040] |(Originally asked by @lizztheblizz on Twitter.) [131079970010] |According to Linux Info, the origin of the name may have come from file system layout / permissions: [131079970020] |The use of the term root for the all-powerful administrative user may have arisen from the fact that root is the only account having write permissions (i.e., permission to modify files) in the root directory. [131079970030] |The root directory, in turn, takes its name from the fact that the filesystems (i.e., the entire hierarchy of directories that is used to organize files) in Unix-like operating systems have been designed with a tree-like (although inverted) structure in which all directories branch off from a single directory that is analogous to the root of a tree. [131079980010] |The original home directory of the root user was the root of the filesystem '/' (http://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/etc/passwd). [131079980020] |I think the user was indeed named after that directory. [131079980030] |But why 'root' and not 'start' or 'origin' or something else? [131079980040] |Well, before Ken Thompson and Dennis Ritchie wrote UNIX, they were (also at Bell Labs) developing Multics. [131079980050] |If you take a look at Multics history, you will find that ROOT existed there too (http://web.mit.edu/multics-history/source/Multics_Internet_Server/Multics_mdds.html). [131079980060] |So the name must come from Multics. [131079990010] |Security through obfuscation -- nobody would guess the superuser account was named root! [131080000010] |how to connect kvm guest vm to internet with wireless? [131080000020] |Because my wireless network adapter does not support bridging, it is really difficult to get the guest vm connect to the open internet and have an IP address in the same network as the host. [131080000030] |I am using ubuntu 10.10, and the kvm version does not support vde, so it becomes even more difficult. [131080000040] |How can I fix that? [131080000050] |@stribika's idea is great, but I would prefer to let my VMs connect to the same network as my host. e.g. my host network is 192.168.1.0, and I want my VM's network to be the same. [131080010010] |You should be able to use the user-mode networking stack. [131080010020] |Start qemu like this: [131080010030] |The important options: [131080010040] |
  • -net nic: Show a virtual network card for the guest
  • [131080010050] |
  • -net user: Make the qemu process on the host communicate over the real network just like any other process would
  • [131080010060] |
  • net=10.0.0.0/8: The subnet on the virtual network
  • [131080010070] |
  • host=10.0.0.1: The host IP address on the virtual network
  • [131080010080] |
  • hostfwd=tcp:127.0.0.1:2222-10.0.0.2:22: The qemu process on the host listens for TCP connections from localhost on port 2222 and forwards them to the virtual network to 10.0.0.2:22 (so you can ssh to your new virtual machine)
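Putting the options above together, the invocation might look roughly like this (a sketch; the image name, memory size and port numbers are assumptions, and option spellings vary a little between qemu/KVM versions):

    qemu-system-x86_64 -enable-kvm -m 512 -hda guest.img \
        -net nic \
        -net user,net=10.0.0.0/8,host=10.0.0.1,hostfwd=tcp:127.0.0.1:2222-10.0.0.2:22

With that running, ssh -p 2222 localhost on the host should reach the guest's sshd through the hostfwd rule.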
  • [131080010090] |On the guest run [131080010100] |Test SSH from host to guest [131080010110] |and from guest to host [131080010120] |Test the internet reachability from the guest [131080010130] |The host process works like a NAT router. [131080010140] |Only TCP and UDP traffic will work. [131080010150] |In particular ping only works between the guest and the host you can't ping google.com (my usual network testing method). [131080010160] |The advantage of this approach is that you don't even need root privileges. [131080020010] |For more information, see the screen man page. [131080030010] |Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells). [131080040010] |Understanding the linux kernel source [131080040020] |I am trying to understand how a function, say mkdir works by looking at the kernel source. [131080040030] |This is an attempt to understand the kernel internals and navigate between various functions. [131080040040] |I know mkdir is defined in sys/stat.h. [131080040050] |I found the prototype like, [131080040060] |Now I need to see on which C file this function is implemented. [131080040070] |From the source directory, I tried [131080040080] |which displayed [131080040090] |But none of them matches the definition in sys/stat.h. [131080040100] |Questions [131080040110] |
  • Which file has the mkdir implementation?
  • [131080040120] |
  • With a function definition like the above, how can I find out which file has the implementation? [131080040130] |Is there any pattern the kernel follows in defining and implementing methods?
  • [131080040140] |I am using this version of kernel. [131080050010] |None of the implementations you found matches the prototype in sys/stat.h Maybe searching for an include statement with this header file would be more successful? [131080060010] |System calls aren't handled like regular function calls. [131080060020] |It takes special code to make the transition from user space to kernel space, basically a bit of inline assembly code injected into your program at the call site. [131080060030] |The kernel side code that "catches" the system call is also low-level stuff you probably don't need to understand deeply, at least at first. [131080060040] |In include/linux/syscalls.h under your kernel source directory, you find this: [131080060050] |Then in /usr/include/asm*/unistd.h, you find this: [131080060060] |This code is saying mkdir(2) is system call #83. [131080060070] |That is to say, system calls are called by number, not by address as with normal functions, because it's not really a function in the way you understand it. [131080060080] |The inline assembly glue code I mentioned above uses this to make the transition from user to kernel space, taking your parameters along with it. [131080060090] |Another bit of evidence that things are a little weird here is that there isn't always a strict parameter list for system calls: open(2), for instance, can take either 2 or 3 parameters, a trick C++ knows how to do, but C doesn't, yet the syscall interface is nominally C-compatible. [131080060100] |To answer your first question, there is no single file where mkdir() exists. [131080060110] |Linux supports many different file systems and each one has its own implementation of the "mkdir" operation. [131080060120] |The abstraction layer that lets the kernel hide all that behind a single system call is called the VFS. [131080060130] |So, you probably want to start digging in fs/namei.c, with vfs_mkdir(). [131080060140] |The actual implementations of the low-level file system modifying code are elsewhere. [131080060150] |For instance, the ext3 implementation is called ext3_mkdir(), defined in fs/ext3/namei.c. [131080060160] |As for your second question, yes there are patterns to all this, but not a single rule. [131080060170] |What you actually need is a fairly broad understanding of how the kernel works in order to figure out where you should look for any particular system call. [131080060180] |Not all system calls involve the VFS, so their kernel-side call chains don't all start in fs/namei.c. mmap(2), for instance, starts in mm/mmap.c, because it's part of the memory management ("mm") subsystem of the kernel. [131080060190] |I recommend you get a copy of "Understanding the Linux Kernel" by Bovet and Cesati. [131080070010] |This probably doesn't answer your question directly, but I've found strace to be really cool when trying to understand the underlying system calls, in action, that are made for even the simplest shell commands. e.g. [131080070020] |The system calls for the command mkdir mynewdir will be dumped to trace.txt for your viewing pleasure. [131080080010] |System calls are usually wrapped in the SYSCALL_DEFINEx() macro, which is why a simple grep doesn't find them: [131080080020] |fs/namei.c:SYSCALL_DEFINE2(mkdir, const char __user *, pathname, int, mode) [131080080030] |The final function name after the macro is expanded ends up being "sys_mkdir". [131080080040] |The SYSCALL_DEFINEx() macro adds boilerplate like tracing code that each syscall definition needs to have. 
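A quick way to confirm this from the top of a kernel source tree (a sketch; a plain grep for the function name misses it precisely because of the macro):

    grep -rn 'SYSCALL_DEFINE2(mkdir' fs/
    # fs/namei.c:SYSCALL_DEFINE2(mkdir, const char __user *, pathname, int, mode)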
[131080090010] |Note: the .h file doesn't define the function. [131080090020] |It's declared in that .h file and defined (implemented) elsewhere. [131080090030] |This allows the compiler to include information about the function's signature (prototype) to allow type checking of arguments and match the return types to any calling contexts in your code. [131080090040] |In general .h (header) files in C are used to declare functions and define macros. [131080090050] |mkdir in particular is a system call. [131080090060] |There may be a GNU libc wrapper around that system call (almost certainly is, in fact). [131080090070] |The true kernel implementation of mkdir can be found by searching the kernel sources and the system calls in particular. [131080090080] |Note that there will also be an implementation of some sort of directory creation code for each filesystem. [131080090090] |The VFS (virtual filesystem) layer provides a common API which the system call layer can call into. [131080090100] |Every filesystem must register functions for the VFS layer to call into. [131080090110] |This allows different filesystems to implement their own semantics for how directories are structured (for example if they are stored using some sort of hashing scheme to make searching for specific entries more efficient). [131080090120] |I mention this because you're likely to trip over these filesystem specific directory creation functions if you're searching the Linux kernel source tree. [131080100010] |A good place to read the Linux kernel source is the Linux cross-reference (LXR). [131080100020] |Searches return typed matches (functions prototypes, variable declarations, etc.) in addition to free text search results, so it's handier than a mere grep (and faster too). [131080100030] |LXR doesn't expand preprocessor definitions. [131080100040] |System calls have their name mangled by the preprocessor all over the place. [131080100050] |However, most (all?) system calls are defined with one of the SYSCALL_DEFINEx families of macros. [131080100060] |Since mkdir takes two arguments, a search for SYSCALL_DEFINE2(mkdir leads to the declaration of the mkdir syscall: [131080100070] |ok, sys_mkdirat means it's the mkdirat syscall, so clicking on it only leads you to the declaration in include/linux/syscalls.h, but the definition is just above. [131080100080] |The main job of mkdirat is to call vfs_mkdir (VFS is the generic filesystem layer). [131080100090] |Cliking on that shows two search results: the declaration in include/linux/fs.h, and the definition a few lines above. [131080100100] |The main job of vfs_mkdir is to call the filesystem-specific implementation: dir->i_op->mkdir. [131080100110] |To find how this is implemented, you need to turn to the implementation of the individual filesystem, and there's no hard-and-fast rule — it could even be a module outside the kernel tree. [131080110010] |Here are a couple really great blog posts describing various techniques for hunting down low-level kernel source code. [131080110020] |
  • http://hostilefork.com/2010/03/14/where-the-printf-rubber-meets-the-road
  • [131080110030] |
  • http://sysadvent.blogspot.com/2010/12/day-15-down-ls-rabbit-hole.html
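As an aside on the strace suggestion above, the invocation would be along these lines (the directory name is arbitrary):

    strace -o trace.txt mkdir mynewdir
    grep mkdir trace.txt    # shows the underlying mkdir("mynewdir", 0777) system call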
  • [131080120010] |The "root" account is the most privileged account on a Unix system. [131080120020] |This account gives you the ability to carry out all facets of system administration, including adding accounts, changing user passwords, examining log files, installing software, etc. [131080120030] |When using this account it is crucial to be as careful as possible. [131080120040] |The "root" account has no security restrictions imposed upon it. [131080120050] |This means it is easy to perform administrative duties without hassle. [131080120060] |However, the system assumes you know what you are doing, and will do exactly what you request -- no questions asked. [131080120070] |Therefore it is easy, with a mistyped command, to wipe out crucial system files. [131080120080] |When you are signed in as, or acting as "root", the shell prompt displays '#' as the last character (if you are using bash). [131080120090] |This is to serve as a warning to you of the absolute power of this account. [131080120100] |The rule of thumb is, never sign in as "root" unless absolutely necessary. [131080120110] |While "root", type commands carefully and double-check them before pressing return. [131080120120] |Sign off from the "root" account as soon as you have accomplished the task you signed on for. [131080120130] |Finally, (as with any account but especially important with this one), keep the password secure! [131080130010] |The "root" account is the most privileged account on a Unix system [131080140010] |For more information, see http://tmux.sourceforge.net/ [131080150010] |tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached [131080170010] |Secure Copy or SCP is a means of securely transferring computer files between a local and a remote host or between two remote hosts [131080180010] |From Wikipedia: [131080180020] |The term "memory" is used for the information in physical systems which are fast (i.e. RAM), as a distinction from physical systems which are slow to access (i.e. data storage). [131080180030] |By design, the term "memory" refers to temporary state devices, whereas the term "storage" is reserved for permanent data. [131080180040] |Advances in storage technology have blurred the distinction a bit —memory kept on what is conventionally a storage system is called "virtual memory". [131080190010] |In computing, memory refers to the state information of a computing system, as it is kept active in some physical structure. [131080210010] |A Graphical User Interface (or GUI) provides an image-based system to allow a user to interact with a computing device [131080230010] |A computer network allows sharing of resources and information among interconnected devices [131080240010] |According to the GNOME Website: [131080240020] |The GNOME project provides two things: The GNOME desktop environment, an intuitive and attractive desktop for users, and the GNOME development platform, an extensive framework for building applications that integrate into the rest of the desktop. [131080250010] |GNOME: The Free Software Desktop Projec [131080270010] |Fedora is a Red Hat Community Project providing a free, fast, stable, and powerful operating system for everyday use. [131080280010] |Shell scripting is often considered a simple domain-specific programming language. 
[131080280020] |Typical operations performed by shell scripts include file manipulation, program execution, and printing text. [131080290010] |A shell script is a script written for the shell, or command line interpreter, of an operating system. [131080300010] |Handling Signals [131080300020] |Each Unix signal has a default set of effects on a Unix program. [131080300030] |Programmers can code their applications to respond in customized ways to most signals. [131080300040] |These custom pieces of code are called signal handlers. [131080300050] |Two signals are unable to be redefined by a signal handler. [131080300060] |SIGKILL always stops a process and SIGSTOP always moves a process from the foreground to the background. [131080300070] |These two signals cannot be "caught" by a signal handler. [131080310010] |A signal is a message which can be sent to a running process. [131080310020] |Signals can be initiated by programs, users, or administrators [131080330010] |Installing is the process of deploying a software package to a computer [131080340010] |Mac OS X is a series of Unix-based operating systems and graphical user interfaces developed, marketed, and sold by Apple Inc. [131080350010] |Which bash will expand {1..$VAR} in the same way that zsh does. [131080350020] |In response to a comment of mine to this question on SF the OP asserts that the [131080350030] |expands correctly in bash. [131080350040] |I have access to bash 4.0.33 (Ubuntu), 3.2.25 (Centos) and 3.00.16(1) (solaris 10). [131080350050] |None of these will expand the {1..$NUM}. [131080350060] |Does anyone know which versions of bash do the expansion? [131080350070] |If it's not bash what is it ? [131080350080] |I know zsh will do the expansion but in the OP's script the shebang should remove the possibility of an alias ? [131080360010] |Brace expansion does not expand variables, because the brace expansion is the very first step of the shell expansion and the variables are expanded later. [131080360020] |From the bash man page: [131080360030] |Brace expansion is performed before any other expansions, and any characters special to other expansions are preserved in the result. [131080360040] |It is strictly textual. [131080360050] |Bash does not apply any syntactic interpretation to the context of the expansion or the text between the braces. [131080360060] |So: [131080360070] |prints [131080370010] |To expand this you'd have to use eval: [131080370020] |Be very, very careful with eval though! [131080370030] |Make sure you check that it's really a number before you execute something like this if you're accepting user input. [131080370040] |As my colleague observed: "eval is evil!" [131080380010] |I'm pretty sure the person asking the question on server fault is wrong. [131080380020] |They're either leaving out some detail intentionally or are unaware of some weird configuration on the machine (/bin/bash as a link to something else). [131080380030] |I tried on the two-week-old bash 4.2 release, on bash 3.2 from CentOS 5, and on bash 2.05a (from 2001) which I built myself just now to try. [131080380040] |The oldest source on ftp.gnu.org is 1.14, but that doesn't build cleanly, and I'm not sure trying it is worthwhile. [131080380050] |The documentation for that release contains the exact same key phrases as in the modern documentation: [131080380060] |But actually, now that I think of it, that's pointless, since the "{x..y}" syntax was added in bash 3.0, making looking before that useless. [131080380070] |So, let me try 3.0 ... 
right. [131080380080] |Same behavior. [131080380090] |So yeah. [131080380100] |I'm extremely skeptical. [131080380110] |Something doesn't add up. [131080380120] |(Later...) [131080380130] |Aha! [131080380140] |The server-fault questioner admits "Sorry. my mistake. [131080380150] |I've put for actual number in i in {1..10}, so it worked well." [131080380160] |So, there ya go. [131080380170] |The answer to this question is "no version of bash works like that". :) [131080390010] |Recommended reading to better understand unix/linux internals [131080390020] |I've worked on *nix environments for the last four years as an application developer (mostly in C). [131080390030] |Please suggest some books/blogs etc. for improving my *nix internals knowledge. [131080400010] |O'Reilly Linux Kernel in a Nutshell and O'Reilly Linux Device Drivers [131080410010] |
  • The Unix Time-sharing System (10 pages) -- the original UNIX article by UNIX authors Ken Thompson and Dennis Ritchie, published back in 1974
  • [131080410020] |
  • Design of the Unix operating system -- the classic!
  • [131080410030] |
  • Lions' Commentary on the UNIX kernel source code and the corresponding source code itself
  • [131080420010] |You definitely want to read Advanced Programming in the Unix Environment by Stevens. [131080420020] |Don't let the Advanced title scare you away; it's very readable. [131080430010] |Linux Systems Programming or any other book by Robert Love (these are all O'Reilly books): [131080430020] |http://oreilly.com/catalog/9780596009588 [131080440010] |Books/sites/manuals that I am using frequently: [131080440020] |
  • The Linux Kernel: This book is published online as part of TLDP (The Linux Documentation Project). [131080440030] |It is not up to date and is not an internals manual, but it provides useful information and introductory material about the principles and mechanisms of the kernel. [131080440040] |URL: http://tldp.org/LDP/tlk/
  • [131080440050] |
  • Understanding the Linux Kernel: IMHO, it is the best book for beginners who have a background in operating system design and concepts. [131080440060] |It is reasonably up to date, covering version 2.6 of the kernel. [131080440070] |There is an HTML version of the book on the web. [131080440080] |Most probably it is warez. [131080440090] |URL: http://book.opensourceproject.org.cn/kernel/kernel3rd/index.html
  • [131080440100] |While studying Linux kernel internals, you usually need to learn how the hardware works and what it provides, at least in an abstract manner. [131080440110] |Intel has great manuals for this. [131080440120] |
  • Intel 64 and IA-32 Architectures Software Developer's Manuals: Up-to-date, detailed information. [131080440130] |URL: http://www.intel.com/products/processor/manuals/
  • [131080440140] |
  • Intel 80386 Programmer's Reference Manual: I know this is a little bit old, but I've learned so many things from this manual. [131080440150] |URL: http://www.logix.cz/michal/doc/i386/
  • [131080440160] |If you need to study operating system design and concepts, I suggest the following book: Operating System Concepts, URL: http://www.amazon.com/Operating-System-Concepts-Abraham-Silberschatz/dp/0470128720/ref=dp_cp_ob_b_title_1#reader_0470128720 [131080450010] |To get a sense of why the kernel is the way it is and what it is meant to support, have a look at The Art of Unix Programming by Eric Raymond. [131080450020] |It takes things at a fairly high, philosophical level, but it would go well with the nitty-gritty details of the other books. [131080460010] |Here are some suggestions on how to understand the "spirit" of Unix, in addition to the fine recommendations made in the previous posts: [131080460020] |
  • "The Unix Programming Environment" by Kernighan and Pike: an old book, but it shows the essence of the Unix environment. [131080460030] |It will also help you become an effective shell user.
  • [131080460040] |
  • "Unix for the Impatient" is a useful resource to learn to navigate the Unix environment. [131080460050] |One of my favorites.
  • [131080460060] |If you want to become a power user, there is nothing better than O'Reilly's "Unix Power Tools" which consists of the collective tips and tricks from Unix professionals. [131080460070] |Another book that I have not seen mentioned that is a fun light and education reading is the "Operating Systems, Design and Implementation", the book from Andy Tanenbaum that included the source code for a complete Unix operating system in 12k lines of code. [131080470010] |I agree with all the others and I have to say that Stevens' APUE (I have the second edition) is a classic. [131080470020] |I would also like to add that Eric Raymond's The Art of UNIX Programming ranks right up there with Stevens on my list. [131080480010] |Well, for BSD Unices, there's The Design and Implementation of the 4.4BSD Operating System, parts of which are now apparently available for free at http://www.freebsd.org/doc/en/books/design-44bsd/ [131080490010] |Linux Device Drivers is another good resource. [131080490020] |It would give you another way to get into the inner workings. [131080490030] |From the preface: [131080490040] |This is, on the surface, a book about writing device drivers for the Linux system. [131080490050] |That is a worthy goal, of course; the flow of new hardware products is not likely to slow down anytime soon, and somebody is going to have to make all those new gadgets work with Linux. [131080490060] |But this book is also about how the Linux kernel works and how to adapt its workings to your needs or interests. [131080490070] |Linux is an open system; with this book, we hope, it is more open and accessible to a larger community of developers. [131080510010] |From the CentOS web site: [131080510020] |CentOS is an Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor. [131080510030] |CentOS conforms fully with the upstream vendors redistribution policy and aims to be 100% binary compatible. [131080510040] |(CentOS mainly changes packages to remove upstream vendor branding and artwork.) [131080510050] |CentOS is free. [131080510060] |CentOS is developed by a small but growing team of core developers. [131080510070] |In turn the core developers are supported by an active user community including system administrators, network administrators, enterprise users, managers, core Linux contributors and Linux enthusiasts from around the world. [131080510080] |CentOS has numerous advantages over some of the other clone projects including: an active and growing user community, quickly rebuilt, tested, and QA'ed errata packages, an extensive mirror network, developers who are contactable and responsive, multiple free support avenues including IRC Chat, Mailing Lists, Forums, a dynamic FAQ. [131080510090] |Commercial support is offered via a number of vendors. [131080520010] |CentOS is an Enterprise-class Linux Distribution derived from Redhat Enterprise Linux sources freely provided to the public [131080540010] |A file system structure in which to store computer files [131080550010] |yum is an interactive, rpm based, package manager. [131080550020] |It can automatically perform system updates, including dependency analysis and obsolete processing based on "repository" metadata. [131080550030] |It can also perform installation of new packages, removal of old packages and perform queries on the installed and/or available packages among many other commands/services (see below). 
yum is similar to other high level package managers like apt-get and smart. [131080560010] |Yellowdog Updater Modified (yum) is an interactive, rpm based, package manager [131080570010] |Text editors are often provided with operating systems or software development packages, and can be used to change configuration files and programming language source code. [131080570020] |There are many popular text editors for Unix, including vi / vim, emacs, joe, kate, pico, etc. [131080580010] |A text editor is a type of program used for editing plain text files [131080590010] |FreeBSD® is an advanced operating system for modern server, desktop, and embedded computer platforms. [131080590020] |FreeBSD's code base has undergone over thirty years of continuous development, improvement, and optimization. [131080590030] |It is developed and maintained by a large team of individuals. [131080590040] |FreeBSD provides advanced networking, impressive security features, and world class performance and is used by some of the world's busiest web sites and most pervasive embedded networking and storage devices. [131080600010] |FreeBSD is a free Unix-like operating system descended from A [131080610010] |NetBSD is a free, fast, secure, and highly portable Unix-like Open Source operating system. [131080610020] |It is available for a wide range of platforms, from large-scale servers and powerful desktop systems to handheld and embedded devices. [131080610030] |Its clean design and advanced features make it excellent for use in both production and research environments, and the source code is freely available under a business-friendly license. [131080610040] |NetBSD is developed and supported by a large and vivid international community. [131080610050] |Many applications are readily available through pkgsrc, the NetBSD Packages Collection. [131080620010] |NetBSD is a freely available open source version of Unix-derivative Berkeley Software Distribution (BSD); due to convenient license and portability, NetBSD is often used in embedded systems [131080640010] |In the X window system, a window manager controls the way your desktop works, including how the windows look and act. [131080650010] |According to Wikipedia: [131080650020] |grep is a command line text search utility originally written for Unix. [131080650030] |The name is taken from the first letters in global / regular expression / print, a series of instructions in text editors such as ed.1 A backronym of the unusual name also exists in the form of Generalized Regular Expression Parser. [131080650040] |The grep command searches files or standard input globally for lines matching a given regular expression, and prints them to the program's standard output. [131080650050] |

    History

    [131080650060] |Grep was created by Ken Thompson as a standalone application adapted from the regular expression parser he had written for the ed editor (which he also created). [131080650070] |The name grep comes from the ed editor command it simulated, g/re/p (global regular expression print). [131080650080] |Its official date of creation is given as March 3, 1973 in the Manual for Unix Version 4. [131080660010] |grep is a command-line tool used to search files to find text patterns [131080680010] |Solaris is a Unix operating system originally developed by Sun Microsystems. [131080680020] |It superseded their earlier SunOS in 1992. [131080700010] |The .NET Framework, originally developed by Microsoft [131080720010] |Content related to computer security [131080730010] |Sudo (su "do") allows a system administrator to delegate authority to give certain users (or groups of users) the ability to run some (or all) commands as root or another user while providing an audit trail of the commands and their arguments. [131080740010] |sudo - Execute a command with superuser privileges [131080750010] |Cron is a daemon that runs periodic scheduled jobs. [131080750020] |Use the crontab command to edit the table of scheduled jobs. [131080750030] |Use at to schedule a job for one execution only at a specific date. [131080750040] |

    Common pitfalls

    [131080750050] |If a command works when you type it in a terminal but not from a crontab, here are some common reasons (a minimal crontab sketch addressing them follows this list): [131080750060] |
  • Cron provides a limited environment, e.g., a minimal $PATH.
  • [131080750070] |
  • Cron uses /bin/sh, which may not be the shell you normally use.
  • [131080750080] |
  • Cron treats the % character specially (it is turned into a newline in the command).
  • [131080750090] |
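A minimal crontab sketch that works around the first two pitfalls (the PATH value and the script path are assumptions; edit your crontab with crontab -e):

    SHELL=/bin/bash
    PATH=/usr/local/bin:/usr/bin:/bin
    # run the backup script every day at 02:30
    30 2 * * * /home/user/bin/backup.sh >/tmp/backup.log 2>&1

Redirecting the output also makes failures easier to diagnose, since cron otherwise tries to mail it to you (or silently drops it).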

    Further reading

    [131080750100] |
  • Why did my crontab not trigger?
  • [131080750110] |
  • Run a script via cron every other week
  • [131080750120] |
  • Where are cron errors logged?
  • [131080760010] |Cron is a job scheduler that allows users to run commands periodically [131080780010] |The X window system (commonly X Window System or X11, based on its current major version being 11) is a computer software system and network protocol that provides a basis for graphical user interfaces (GUI) for networked computers [131080800010] |According to Wikipedia: [131080800020] |iptables is a user space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores. [131080800030] |Different kernel modules and programs are currently used for different protocols; iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables for Ethernet frames. [131080800040] |Iptables requires elevated privileges to operate and must be executed by user root, otherwise it fails to function. [131080800050] |On most Linux systems, iptables is installed as /usr/sbin/iptables and documented in its man page [2], which can be opened using man iptables when installed. [131080800060] |It may also be found in /sbin/iptables, but since iptables is not an "essential binary", but more like a service, the preferred location remains /usr/sbin. [131080800070] |iptables is also commonly used to inclusively refer to the kernel-level components. x_tables is the name of the kernel module carrying the shared code portion used by all four modules that also provides the API used for extensions; subsequently, Xtables is more or less used to refer to the entire firewall (v4,v6,arp,eb) architecture. [131080810010] |iptables allow creation of rules to define packet filtering behavior [131080830010] |A device driver or software driver is a computer program allowing higher-level computer programs to interact with a hardware device [131080840010] |Network problems after Ubuntu upgrade [131080840020] |I'm having real problems having upgraded Ubuntu from 10.04 to 10.10. [131080840030] |My main problem is I cannot get a network connection either wired or wireless. [131080840040] |When I right click the Network manager applet the Enable Networking check box is greyed out and unchecked. [131080840050] |Any ideas how to resolve this? [131080840060] |Thanks [131080850010] |How to show the number of installed packages [131080850020] |What is the Debian equivalent of Fedora's yum list installed | grep wc --lines? [131080860010] |According to this thread: [131080860020] |To list installed packages: [131080860030] |To see if a package is installed: [131080870010] |dpkg -l is nice but I actually find myself using apt-show-versions (not installed by default on Debian; install the package of the same name) a lot instead, especially when I want to process the output further (dpkg tries to be too clever with line wrapping). [131080880010] |There are subtle variants like dpkg -l | grep -c '^?i' if you want to include packages that are installed but whose removal you've requested. [131080880020] |Another way is [131080880030] |You can even poke directly into the dpkg database: [131080880040] |This one includes packages that are not installed but that have configuration files left over; you can list these with dpkg -l | grep '^rc'. [131080890010] |Synaptic, a GUI package manager, displays the count at the bottom of its main window. 
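Putting the dpkg-based answers above together, a couple of one-liners for counting installed packages (which status flags you count is a matter of taste):

    dpkg -l | grep -c '^ii'                      # packages currently installed
    dpkg --get-selections | grep -cv deinstall   # selections not marked for removal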
[131080900010] |How to get vodafone mobile connect (or equivalent) working reliably on debian squeeze [131080900020] |I got a Vodafone K3760 at some point in Debian/Lenny's lifetime, for use with a Lenovo S10e. [131080900030] |I installed (from the Vodafone BetaVine site) usb-modeswitch 0.97 and vodafone-mobile-connect (VMC) 2.10.01-1 (or maybe it was actually vodafone-mobile-connect_svn20090615) and it all worked pretty well. [131080900040] |After the squeeze update I have had less luck. [131080900050] |Unlike lenny, squeeze includes usb_modeswitch 1.1.4-2, and I also grabbed vodafone-mobile-connect 2.25.01. [131080900060] |It does actually work... for a few minutes, then the system locks up (or sometimes the gnome panel dies, but xterms on-screen continue to work). [131080900070] |Anyone know what magic combinations of packages, if any, will work reliably with Squeeze ? [131080900080] |(The obvious next thing for me to try is to revert the relevant packages to their older versions; VMC does have quite a lot of dependencies though so I'm not too sure how well this will work). [131080900090] |There's another school of thought which says forget about vodafone's app and just use the new Gnome network manager's mobile broadband support. [131080900100] |I haven't tried this yet, but I kind of like the vodafone app (for it's usage monitoring, and for it's access to SMS messages on the K3760; these are pretty important if you travel abroad and want to see what outrageous charges you'll be stung for if you dare connect). [131080900110] |But if Gnome or other apps provide such functionality, I'd happily drop the Vodafone software. [131080900120] |Thanks for any advice [131080900130] |Just to be clear: the system is rock solid when using wired or built-in wifi connection. [131080900140] |Inserting the dongle seems to be pretty safe; the problems start once connected. [131080900150] |Update 1: I couldn't revert to the older vodafone mobile connect because of various python dependencies not in squeeze. [131080900160] |Reverting to the older usb_modeswitch seems to be possible, but at least this once I got a spew of kernel bug notifications (alas, these don't seem to submit anything useful: example) and got about 30mins uptime before a lockup. [131080900170] |I'm not too sure what to make of dmesg (have got some saved); there are clearly some call stacks into serio_interrupt in i8042_interrupt in ...a pile of handle IRQ functions ... but there are other ones too. [131080900180] |Update 2: I managed to get a dmesg log of some kernel errors from the "all-squeeze" version (not using any modules from the older setup) before lockup. [131080900190] |This can be found here; no idea how to interpret it myself. [131080910010] |Try a younger kernel, perhaps something from "sid" (aka Unstable). [131080920010] |I also asked this question on the Vodafone developer forums; interesting alternative answer there seems to confirm the issue and refers me to VMC's replacement Betavine Connection Manager (BCM), but that solution is untried by me yet. [131080930010] |How to install some packages from "unstable" Debian on a computer running "stable" Debian? [131080930020] |On a computer running "stable" Debian, when trying to install a package which is in the unstable list on the Debian web site using the "aptitude install /unstable" command, I get output similar to this: [131080930030] |0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. [131080930040] |Need to get 0 B of archives. 
[131080930050] |After unpacking 0 B will be used. [131080930060] |What can I do to be able to install "unstable" packages? [131080930070] |(I thought of adding the repository to sources.list, but I don't want everything to start being installed from "unstable"). [131080930080] |So: how can I install unstable packages (while using "/unstable" at the end of the package name)? [131080940010] |You do need to have unstable listed in your sources.list. [131080940020] |Otherwise apt just won't find the package. [131080940030] |To avoid unstable packages being pulled in, you have two ways. [131080940040] |
  • The easy way is to add a Default-Release clause to /etc/apt/apt.conf. [131080940050] |
  • The hard way is to use APT preferences. [131080940060] |In /etc/apt/preferences: [131080940070] |Note that for most of the lifetime of a Debian release, it's not practical to install most packages from unstable on a stable system, because they'll pull in a lot of libraries from unstable, and you'll end up with an unstable system. [131080940080] |First look if there is a backport for them. [131080940090] |Otherwise, if you want to install a package from unstable but not have to pull in its dependencies, try getting the source from unstable and recompiling. [131080950010] |How do I set up dual quadro cards in RHEL 5.5 [131080950020] |have a RHEL 5 workstation with 2 nvidia Quadro FX4500 cards, with one display attached to each card. [131080950030] |After doing a clean install of RHEL 5.5, the second display doesnt work (it worked ok in RHEL 5.2). [131080950040] |Neither separate X screens nor Xinerama are working. [131080950050] |The kernel version is 2.6.18-194.el5 [131080950060] |I've tried nvidia drivers 185.18.36 (the ones that i was using on 5.2) and the latest 260.19.36 and neither works. [131080950070] |My xorg.conf is as follows: [131080950080] |Xorg.0.log (edited) is as follows: [131080950090] |(the snipped part can be changed if necessary) [131080950100] |Any help at all would be appreciated. [131080950110] |Cheers, Alex [131080960010] |Connect alauda driver to an mtd deivce [131080960020] |I have a USB card reader, an Olympus MAUSB-10. [131080960030] |It provides direct flash access to SmartMedia or xD cards, using the Linux alauda driver. [131080960040] |This is different from a typical card reader which just exposes it as a standard USB mass storage device. [131080960050] |There's drivers in the Linux kernel that will do the FTL thing and expose this as a standard block device, but I want direct flash access. [131080960060] |I was wondering if it's possible to use the various utilites of mtd-tools to read, write, and erase directly to it. [131080960070] |So the device is recognized by lsusb, and drivers aluada and nand_ecc are loaded. [131080960080] |But cat /proc/mtd isn't revealing another MTD device available, and I don't see any additional devices in /dev. [131080960090] |How do I create a new mtd device and connect it to the alauda driver? [131080970010] |Disable screen blanking on text console [131080970020] |I'm running linux clusters, mostly on SLES10. [131080970030] |The servers are mostly blades, accessed via remote console. [131080970040] |There is a real console in the server room, but switched off. [131080970050] |I would like to disable the screen blanking as it serves no purpose and is a nuisance. [131080970060] |You have to press key to see if you are connected which is a pain. [131080970070] |We are running in runlevel 3, so the console is in text mode, no X11 involved. [131080980010] |Try using this: [131080990010] |I've implemented and tested the following configuration, which works fine on sles10, my workhorse at the moment. [131080990020] |In [131080990030] |add [131080990040] |it looks like that is all it takes. [131080990050] |Thanks for Uku Loskit and Gilles for the push in the right direction. 
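For reference, one widely used way to stop text-console blanking from a shell, independent of whatever distribution-specific file the answer above edited (a sketch; run it on the console itself, e.g. from a boot script):

    setterm -blank 0 -powersave off -powerdown 0
    # on reasonably recent kernels, consoleblank=0 on the kernel command line has a similar effect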
[131081010010] |Universal Serial Bus (USB) establishes a connection between devices and a host computer [131081020010] |.dtors looks writable, but attempts to write segfault [131081020020] |This is Ubuntu 9.04, 2.6.28-11-server, 32bit x86 [131081020030] |For the uninitiated: gcc creates a destructor segment, .dtors, in the elf executable, which is called after main() exits. [131081020040] |This table has long been writable, and it looks like it should be in my case (see readelf output). [131081020050] |But attempting to write to the table causes a segfault. [131081020060] |I realize there has been a movement toward readonly .dtors, plt, got lately, but what I don't understand is the mismatch between readelf and the segfault. [131081030010] |I can tell why it's failing, although I don't actually know what part of the system is responsible. [131081030020] |While .dtors is marked writable in the binary, it looks like it (along with .ctors, the GOT, and a few other things) are being mapped into a separate, non-writable page in memory. [131081030030] |On my system, .dtors is getting put at 0x8049f14: [131081030040] |If I run the executable and check /proc/PID/maps, I see: [131081030050] |.data/.bss are still writable in their own page, but the others in 0x8049000-0x804a000 aren't. [131081030060] |I assume this is a security feature in the kernel (as you said, "there has been a movement toward readonly .dtors, plt, got lately"), but I don't know specifically what it's called (OpenBSD has something very similar called W^X; Linux has PaX, but not built into most kernels) [131081030070] |You can get around it with mprotect, which lets you change the in-memory attributes of a page: [131081030080] |With that, my test program doesn't crash, but if I try to overwrite the end sentinel of .dtors (0x8049f18) with the address of another function, that function still doesn't execute; that part I can't figure out. [131081030090] |Hopefully somebody else knows what's responsible for making the page readonly, and why modifying .dtors doesn't seem to do anything on my system [131081040010] |Those sections are marked GNU_RELRO (readonly relocations), which means that as soon as the dynamic loader has fixed up (at load time, there are no lazy relocations there) all the relocations, it marks those sections read-only. [131081040020] |Note that most of .got.plt is on another page, so doesn't get the treatment. [131081040030] |You can see the linker script with ld --verbose, if you search for RELRO you'll find something similar to: [131081040040] |which means that the RELRO sections end 12 bytes into .got.plt (pointers to dynamic linker functions are already resolved, so can be marked read-only). [131081040050] |The hardened Gentoo project has some documentation about RELRO at http://www.gentoo.at/proj/en/hardened/hardened-toolchain.xml#RELRO. [131081060010] |Grub is a modern boot loader installed by most linux distributions [131081070010] |

    Further reading

    [131081070020] |
  • Why do we use su - and not just su?
  • [131081080010] |su - Substitute user identity; used to run a shell as a different user [131081100010] |vi - Editor installed on just about any Unix system [131081120010] |Updating FreeBSD 8.0 to 8.1 (methods and policy) [131081120020] |I have 8.0-RELEASE-p4 + a few ports installed. [131081120030] |I wonder whether I should update to 8.1. [131081120040] |
  • How long is 8.0 supported?
  • [131081120050] |
  • How to update the system? [131081120060] |I couldn't find anything about it in the handbook.
  • [131081120070] |SOLUTION (based on gvkv answer): I take the liberty of describing all steps I've done at the end: [131081130010] |
  • According to http://security.freebsd.org/#sup, FreeBSD 8.0 is supported until November 30, 2010.
  • [131081130020] |
  • We're still trying to figure out the "Best way" to do this. [131081130030] |There's more than one way to do it, and the FreeBSD docs are too ambiguous. [131081130040] |Also keep in mind that "Updating FreeBSD" is considered a separate topic from "updating most of the software (e.g. ports and packages) on your system".
  • [131081130050] |The page at http://www.freebsd.org/doc/handbook/updating-freebsdupdate.html talks about updating FreeBSD. [131081140010] |You also might want to see how to rebuild the base system when updating. [131081140020] |That is how I do it... [131081150010] |
  • The 8.0-RELEASE branch will be supported until November 20, 2010. [131081150020] |If you want to stay on the 8 branch (RELENG-8) you will have at least until July 31, 2012; if there are any update releases in that branch then you will have until at least two years past the release date of the point release. See the links given by Stefan Lasiewski.
  • [131081150030] |
  • Updating is as easy as following the instructions.
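For the record, a binary-upgrade sketch using freebsd-update (assuming a GENERIC kernel and 8.1-RELEASE as the target; the handbook page linked above has the authoritative steps):

    freebsd-update fetch
    freebsd-update install                    # bring 8.0 up to the latest patch level first
    freebsd-update upgrade -r 8.1-RELEASE     # fetch the 8.1 upgrade
    freebsd-update install                    # install it; re-run after the reboot it asks for
    reboot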
  • [131081160010] |diff - Command-line tool to display the differences between two files, or each corresponding file in two directories. [131081180010] |The C programming language; used to write large portions of Unix [131081210010] |mv - Command-line tool to move a file [131081230010] |apt-get - Command-line tool used to work with Ubuntu's Advanced Packaging Tool (APT) [131081240010] |For more information please see the official website. [131081250010] |The python programming language [131081260010] |emacs - A class of text editors, usually characterized by their extensibility [131081280010] |Red Hat Enterprise Linu [131081300010] |sed - A command-line stream editor for filtering and transforming text [131081320010] |FreeBSD: Which branch is supported for longer? [131081320020] |Can someone help explain the FreeBSD support policy? [131081320030] |I'm looking at http://security.freebsd.org/#sup , which says: [131081320040] |We haven't decided between FreeBSD 7.x and FreeBSD 8.x yet. [131081320050] |We want to go with a FreeBSD branch which is supported for a while. [131081320060] |Which release will be supported for longer? [131081320070] |RELENG_7, or RELENG_7_3 ? [131081320080] |RELENG_7 says "last release", which is a big ambiguous. [131081330010] |It's difficult to know exactly how long a branch (RELENG_X) will be supported since FreeBSD does not have a fixed release schedule. [131081330020] |As you can see from your chart, FreeBSD branches are supported for 2 years from the last release. [131081330030] |What this means is that since there is an expected 7.4 release, the FreeBSD 7 branch will supported for two years past whatever the release date of 7.4 is. [131081330040] |So, in this case you will have until at least August 31, 2012 on branch RELENG_7 (if it were to be released right now--I don't think it will be but that's the math). [131081340010] |I'd say 8.0 is more likely to be longer supported. [131081340020] |Usually the previous branch contains mostly fixes and most heavy support goes to new branch. [131081340030] |The only exceptions may be releases that are big failures and in such cases 8.0 could get 'forgotten'. [131081340040] |But it is rather rare case. [131081350010] |The RELENG_N branch will always be supported longer than RELENG_(N-1) branch. [131081350020] |The more important question is "Should I install a N.0 release or do you wait for N.1?" [131081350030] |Since FreeBSD 8 is already at 8.1 I would recommend 8.1 for any new installs unless you have a specific reason for not wanting the latest version. [131081350040] |To answer your specific question: RELENG_7 will be supported at least as long as RELENG_7_3 but if RELENG_7_4 is released then RELENG_7 will be support as long as 7_4. [131081360010] |svn - A centralized version control system [131081380010] |awk - pattern-directed scanning and processing languag [131081400010] |An open source telnet and SSH Client for the Windows and Unix platforms. [131081420010] |Virtual Network Computing (VNC) is a graphical desktop sharing system that can be used to control a computer remotely [131081430010] |Linux equivalent for Microsoft Visio? [131081430020] |Visio is a great tool for creating diagrams, flowcharts, prototyping, etc. [131081430030] |But it is Windows-only and is not free. [131081430040] |Are there any graphical tools for Linux that can do many of these same tasks well? [131081440010] |How about Pencil? [131081440020] |http://pencil.evolus.vn/en-US/Home.aspx [131081450010] |There's Dia.. 
[131081450020] |Not nearly as many features as Visio, but does diagrams: http://live.gnome.org/Dia [131081460010] |Dia is a good one. [131081460020] |It lets you draw different kind of diagrams including flowcharts, UML diagrams, ERD, network graphs and so on. [131081470010] |For UML and DB Diagrams, you could use UMLet. [131081480010] |Kivio, as the name kinda implies, is KDE's competitor to Visio. [131081480020] |It is a part of the KOffice suite. [131081480030] |Note: KOffice, as well as some of its applications were recently renamed. [131081480040] |KOffice is now called Calligra Suite and Kivio is called Calligra Flow. [131081480050] |However, there has not yet been a release since the rename. [131081490010] |You can find all the open source alternatives to visio here http://www.osalt.com/visio and any other equivalent for a commercial software [131081500010] |diagramly works on Linux. [131081510010] |WireframeSketcher is a cross-platform tool that can be used for prototyping. [131081520010] |I used "DIA" and "UMBRELLO", both fine, but not like Visio. [131081530010] |I found THIS link, you should find all the alternatives there. good luck! [131081540010] |How can I call a .NET web service from PHP? [131081540020] |The web service could be a SOAP asmx or a WCF Service. [131081540030] |Assumption here is that the IIS on Windows is serving the web-service and Apache on Linux with PHP 5.3 consuming it. [131081550010] |The hosting system should not matter, calling a web service is the same (in fact, that's one of the points of setting up a web service). [131081550020] |PHP has built in SOAP objects (Manual Entry for it). [131081550030] |Those should be able to access it without any issue. [131081560010] |How to insert the result of a command into the text in vim? [131081560020] |For instance, :echo strftime(%c) will show the current time on the bottom, but how to insert this time string to the text (right after the cursor)? [131081570010] |Will put it on the next line, then you could press J (Shift+J)to join it up to the current position. [131081570020] |Or if you need it all in one command, you could do [131081570030] |or [131081570040] |depending on whether you want a space inserted before the date or not. [131081580010] |:r!date +\%c [131081580020] |see :help :r! [131081590010] |These commands will insert the output of strftime("%c") right where your cursor is: [131081590020] |and [131081590030] |There are other ways to do what you want (like, for example, those on Mikel's answer). [131081590040] |Edit: Even better, for in-place insert, use the = register as Chris Johnsen describes [131081600010] |You can use the expression register, "=, with p (or P) in normal mode or in insert mode: [131081600020] |In normal mode: ( here means Control+M, or just press Enter/Return) [131081600030] |In insert mode: ( has the same meaning as above, means Control+R) [131081600040] |If you want to insert the result of the same expression many times, then you might want to map them onto keys in your .vimrc: (here the and should be typed literally (a sequence of five printable characters—Vim will translate them internally)) [131081610010] |If you want to insert the output of a vim command (as opposed to the return value of a function call), you have to capture it. [131081610020] |This is accomplished via the :redir command, which allows you to redirect vim's equivalent of standard output into a variable, file, register, or other target. 
[131081610030] |:redir is sort of painfully inconvenient to use; I would write a function to encapsulate its functionality in a more convenient way, something like [131081610040] |Once you've declared such a function, you can use the expression register (as explained by Chris Johnsen) to insert the output of a command at the cursor position. [131081610050] |So, from normal mode, hit i^R=Exec('ls') to insert the list of vim's current buffers. [131081610060] |Be aware that the command will execute in the function namespace, so if you use a global variable you will have to explicitly namespace it by prefixing it with g:. Also note that Exec(), as written above, will append a terminating newline to even one-line output. [131081610070] |You might want to add a call to substitute() into the function to avoid this. [131081610080] |Also see http://stackoverflow.com/questions/2573021/vim-how-to-redirect-ex-command-output-into-current-buffer-or-file/2573054#2573054 for more blathering on about redir and a link to a related command. [131081620010] |How to transfer a VirtualBox OSE VM to the metal [131081620020] |I have a bunch of VirtualBox VMs (Linux and Windows) and would like to know how to transfer any of them to the metal. [131081630010] |I'm not sure if you can do this with windows guests. [131081630020] |I'll outline what I would do first to move any VM to physical disk, and some "hints" which may help with windows. [131081630030] |So, in general, you need an image of the virtual hard drive: [131081630040] |
  • Check which drives are available (the following is a snippet): [131081630050] |
  • Select the UUID above and convert it: [131081630060] |
  • Then, just copy this image to a hard drive using dd (see the sketch after this list).
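A minimal sketch of those steps, assuming the VirtualBox tools are installed and the guest disk is called MyGuest.vdi; /dev/sdX below is a placeholder for the target physical disk, so double-check it before running dd:

    # check which virtual disks are registered and note the UUID or path
    VBoxManage list hdds

    # convert the virtual disk to a raw image (the exact subcommand may vary by VirtualBox version)
    VBoxManage clonehd MyGuest.vdi MyGuest.img --format RAW

    # write the raw image onto the physical disk (destructive!)
    dd if=MyGuest.img of=/dev/sdX bs=4M && sync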
  • [131081630070] |This should work for most Linux machines. [131081630080] |For Windows, chances are that you will have a lot of trouble. [131081630090] |I would start by creating a new hardware profile in the VM before I even try. [131081640010] |How to change a hostname [131081640020] |During Debian installation, I set my hostname to the wrong value, and now I would like to correct that. [131081650010] |The value is stored in /etc/hostname. [131081650020] |After modifying it, apply the change with /etc/init.d/hostname start [131081660010] |Various services may also depend on being able to resolve the hostname locally, which is often handled by an entry in the /etc/hosts file. [131081670010] |The hostname is stored in three different files: [131081670020] |
  • /etc/hostname Used as the hostname
  • [131081670030] |
  • /etc/hosts Helps resolve the hostname to an IP address
  • [131081670040] |
  • /etc/mailname Determines the hostname the mail server identifies itself as
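As a concrete example (run as root; myhost is just a placeholder for your new name):

    # set the new name and apply it without a reboot
    echo myhost > /etc/hostname
    hostname myhost

    # keep local name resolution working (Debian convention), in /etc/hosts:
    127.0.1.1   myhost

    # if /etc/mailname exists, it usually gets the same value
    echo myhost > /etc/mailname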
  • [131081670050] |You might want to have a deeper look with grep -ir hostname /etc [131081670060] |Restarting affected services might be a good idea as well. [131081680010] |How to find what other machines are connected to the local network [131081680020] |How can I see a list of all machines that are available on the LAN I am part of? [131081690010] |Install nmap and run nmap -sP . [131081700010] |How much do you know about the LAN in question? [131081700020] |I'm assuming you don't know anything and just plugged in the cable or connected to wifi. [131081700030] |
  • Try requesting an IP address with DHCP. [131081700040] |Do you get one? [131081700050] |Then you already know a few things: the gateway IP, the DHCP server IP, the subnet mask and maybe DNS servers.
  • [131081700060] |
  • If you don't get one there is either no DHCP server or the network is MAC filtered.
  • [131081700070] |
  • Either way start capturing packets with wireshark. [131081700080] |If you are on wireless or connected to a hub it's easy. [131081700090] |If you are connected to a switch you can try MAC flooding to switch it back to "hub mode" but a smarter switch will just disable your port. [131081700100] |If you want to try it anyway ettercap can do this for you. [131081700110] |(Or macchanger and a shell script :) )
  • [131081700120] |
  • Looking at the packets you can find IP addresses but, most importantly, you can guess the network parameters. [131081700130] |If you suspect MAC filtering, change your MAC address to one of the observed ones after that host leaves (sends nothing for a while).
  • [131081700140] |
  • When you have a good idea about the network configuration (netmask, gateway, etc), use nmap to scan; a basic sketch follows this list. [131081700150] |Nmap can do a lot more than -sP; in case some hosts don't respond to ping, check out the documentation. [131081700160] |Keep in mind that nmap only works correctly if your network settings and routes are correct.
  • [131081700170] |
  • You can possibly find even more hosts with nmap's idle scan.
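For reference, a rough sketch of the nmap commands being described (192.168.0.0/24 is just an example target; adjust it to whatever parameters you discovered, and note that newer nmap versions spell -sP as -sn):

    # list scan: only reverse-DNS lookups, nothing is sent to the hosts
    nmap -sL 192.168.0.0/24

    # ping scan; on the local subnet nmap uses ARP requests automatically
    nmap -sP -PR 192.168.0.0/24

    # TCP SYN and ACK probes, useful when ICMP or SYN packets are filtered
    nmap -sP -PS22,80,443 -PA80 192.168.0.0/24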
  • [131081700180] |Some (most?) system administrators don't like a few of the above methods, so make sure they are allowed (for example, that it's your network). [131081700190] |Also note that your own firewall can prevent some of these methods (even getting an IP with DHCP), so check your rules first. [131081700200] |Nmap [131081700210] |Here is how to do basic host discovery with nmap. [131081700220] |As I said, your network configuration should be correct when you try this. [131081700230] |Let's say you are 192.168.0.50 on a /24 subnet. [131081700240] |Your MAC address is something that is allowed to connect, etc. [131081700250] |I like to have wireshark running to see what I'm doing. [131081700260] |First I like to try the list scan, which only tries to resolve the PTR records in DNS for the specified IP addresses. [131081700270] |It sends nothing to the hosts, so there is no guarantee a host is really connected or turned on, but there is a good chance. [131081700280] |This mode obviously needs a DNS server which is willing to talk to you. [131081700290] |This may find nothing or it may tell you that every single IP is up. [131081700300] |Then I usually go for an ARP scan. [131081700310] |It sends ARP requests (you see them as "Who has ? Tell " in wireshark). [131081700320] |This is pretty reliable since no one filters or fakes ARP. [131081700330] |The main disadvantage is that it only works on your subnet. [131081700340] |If you want to scan something behind routers or firewalls then use SYN and ACK scans. [131081700350] |A SYN starts a TCP connection and you either get an RST or a SYN/ACK in response. [131081700360] |Either way the host is up. [131081700370] |You might get ICMP communication prohibited or something like that if there is a firewall. [131081700380] |Most of the time, if a firewall filters your packets you will get nothing. [131081700390] |Some types of firewalls only filter the TCP SYN packets and let every other TCP packet through. [131081700400] |This is why the ACK scan is useful. [131081700410] |You will get an RST in response if the host is up. [131081700420] |Since you don't know what firewall is in place, try both. [131081700430] |Then of course you can use the ICMP-based scans with -PE -PP -PM. [131081700440] |Another interesting method is -PO with a non-existent protocol number. [131081700450] |Often only TCP and UDP are considered on firewalls, and no one tests what happens when you try some unknown protocol. [131081700460] |You get an ICMP protocol unreachable if the host is up. [131081700470] |You can also tell nmap to skip host discovery (-Pn) and do a portscan on every host. [131081700480] |This is very slow, but you might find other hosts that the host discovery missed for some reason. [131081710010] |I like the ip neigh command, which comes with iproute2. [131081710020] |However, I think it only works with ARP-able nodes. [131081720010] |How to cut a part from a log file? [131081720020] |I have an 8 GB log file (a Rails production log), and I need to cut out the part between two dates (lines). [131081720030] |Which command do I have to use to do this? [131081730010] |Something like [131081730020] |tee cut-log allows you to see on screen what is being put in the file cut-log.
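The snippet itself was lost in this copy; as a sketch of one way to do it with awk, assuming the log lines contain sortable timestamps (the two dates below are placeholders):

    awk '/2011-02-10 09:00/,/2011-02-11 17:00/' production.log | tee cut-log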
[131081730030] |EDIT: [131081730040] |To satisfy fred.bear's exacting standards, here's a sed solution (though arguably the awk solution is a lot prettier): [131081740010] |To print everything between FOO and BAR inclusive, try: [131081750010] |If your log file has dates in the format YYYY-MM-DD, then, to find all entries for, say, 2011-02-10, you can do: [131081750020] |Now, if you want to find the entries for both 2011-02-10 and 2011-02-11, again use grep but with multiple patterns: [131081770010] |This will do what you want... [131081770020] |Both including and excluding the parameter dates are shown. [131081770030] |It tests for a (sorted) date in field 2... [131081770040] |Here is an example of the test data. [131081770050] |And here is the test-data generator. [131081780010] |Working with files of this size is always hard. [131081780020] |A way forward could be to split this file into a couple of small ones; to do this you can use the split command. [131081780030] |Even though it is split up, you can still work with the files as if they were one, using a bash for loop. [131081780040] |But instead of cat you can use inverted grep to get rid of unwanted data that is irrelevant here (or whatever kind of refinement you need). [131081780050] |At this point you will just work with a lot of smaller files, and the commands the others mentioned above will work smoother on a lot of smaller files. [131081780060] |And when you are done, you can use a second for loop to build the new, smaller file up again. [131081780070] |Update: Since we split the data into multiple files, there is going to be a lot of hard-drive work and that takes time. [131081780080] |(In this question, apparently 5 minutes.) [131081780090] |On the other hand, the next steps would probably be faster. [131081780100] |So this method is probably pointless for a simple grep, awk, or sed operation, but if the search patterns become more complicated it could become faster. [131081790010] |Is there a standard way to configure what policy iptables loads on startup under Debian? [131081790020] |I've just started playing with Debian 6.0 (Squeeze) in a VPS environment, as a way to learn more about system administration. [131081790030] |One thing I'd like to configure is a default packet filtering policy. [131081790040] |(Permit new and established SSH inbound, permit rate-limited ICMP inbound, drop everything else inbound, permit anything outbound, etc.) [131081790050] |I've determined how to build a policy which roughly meets my needs; however, there doesn't seem to be an existing init script which specifies where iptables should load its policy from on startup. [131081790060] |Is there a standard way to do this on Debian? [131081790070] |I've seen various alternatives suggested, all of which seem to involve creating a shell script somewhere and invoking it in RC scripts, initscripts, /etc/network/interfaces... [131081790080] |Is there no 'blessed' way to do this in Debian? [131081790090] |It seems like a fairly glaring oversight. [131081800010] |There is no standard default way to set up a firewall in Debian, except maybe calling a script with a pre-up rule in the network configuration (/etc/network/interfaces), but there are many packages providing different ways to do it. [131081800020] |For example, the packages uruk and iptables-persistent provide very simple scripts to load and back up a simple firewall.
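As an illustration of the pre-up approach mentioned above (the file name is just a common convention; iptables-persistent expects its own path, so check that package's documentation):

    # save the currently loaded rules once
    iptables-save > /etc/iptables.rules

    # /etc/network/interfaces: restore them before the interface comes up
    auto eth0
    iface eth0 inet dhcp
        pre-up iptables-restore < /etc/iptables.rules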
[131081810010] |How to check why the cdrom cannot be open exclusivly [131081810020] |Sometimes when I try to burn image the message that /dev/sr0 cannot be open exclusively appears. lsof does not shows any program that opens /dev/sr0 (I checked for aliases too). [131081820010] |Usually that tells you which process is accessing or locking the resource. [131081830010] |Also, lsof can be a great help in determining who's using a filesystem. [131081830020] |sudo lsof | grep /media/cdrom [131081840010] |How to determine where an environment variable came from [131081840020] |I have a linux instance that I set up some time ago. [131081840030] |When I fire it up and log in as root there are some environment variables that I set up but I can't remember or find where they came from. [131081840040] |I've checked ~/.bash_profile, /etc/.bash_rc, and all the startup scripts. [131081840050] |I've run find and grep to no avail. [131081840060] |I feel like I must be forgetting to look in some place obvious. [131081840070] |Is there a trick for figuring this out? [131081850010] |Other than find and grep (which seem to have already failed), I'm afraid not. [131081860010] |@Cian is correct. [131081860020] |Other than using find and grep, there isn't much you can do to discover where it came from. [131081860030] |Knowing that it is indeed an environment variable, I would attempt focusing your search in /etc/ and your home directory. [131081860040] |Replace VARIABLE with the appropriate variable you're searching for: [131081860050] |$ grep -r VARIABLE /etc/* [131081860060] |$ grep -r VARIABLE ~/.* [131081870010] |Check your startup scripts for files that they source using . (dot) or source. [131081870020] |Those files could be in other directories besides /etc and $HOME. [131081880010] |If you use the "env" command to display the variables, they should show up roughly in the order in which they were created. [131081880020] |You can use this as a guide to if they were set by the system very early in the boot, or by a later .profile or other configuration file. [131081880030] |In my experience, the "set" and "export" commands will sort their variables by alphabetical order, so that listing isn't as useful. [131081890010] |If you put set -x in your .profile or .bash_profile, all subsequent shell commands will be logged to standard error and you can see if one of them sets these variables. [131081890020] |You can put set -x at the top of /etc/profile to trace it as well. [131081890030] |The output can be very verbose, so you might want to redirect it to a file with something like exec 2>/tmp/profile.log. [131081890040] |If your system uses PAM, look for pam_env load requests in /etc/pam.conf or /etc/pam.d/*. [131081890050] |This module loads environment variables from the specified files, or from a system default if no file is specified (/etc/environment and /etc/security/pam_env.conf on Debian and Ubuntu). [131081890060] |Another file with environment variable definitions on Linux is /etc/login.defs (look for lines beginning with ENV_). [131081900010] |This is from a young Linux fanboy who came across Area 51 during the break between his classes. [131081900020] |It was me, and the first thing I did was to enter "linux" in the search box. [131081900030] |To my surprise there were a lot of people already committing to it (I was under the impression that there was no healthy community for *nix geeks). [131081900040] |So I committed, just before the private beta began. 
[131081900050] |I have been with the site since then (and became one of the first to get a Fanatic badge). [131081900060] |I make sure to read all questions and do everything properly with them. [131081900070] |Besides voting and answering, I often leave comments on how a post can be improved, encourage people to vote... [131081900080] |I see becoming a moderator as a chance to contribute more to the community that I learn from. [131081900090] |The day the nomination started, I was tempted, but felt undeserving. [131081900100] |However, after spending a few days thinking, I am writing this to nominate myself. [131081900110] |At this point I see myself as a user who understands the site well enough to help other, newer users. [131081900120] |I want to take my turn and share the hard work of the moderators. [131081910010] |ISO booting with grub2 [131081910020] |I am using Linux Mint 10, and it is installed on sda8. [131081910030] |I edited /etc/grub.d/40_custom: [131081910040] |Then I ran sudo update-grub2. [131081910050] |After rebooting, I chose “Fedora ISO”. [131081910060] |The computer restarted. [131081910070] |I tried following this guide, but it didn't help. [131081910080] |Do I need to change the file permissions of the boot and casper folders, or is there some other problem? [131081920010] |You have to make sure that the lines point to the correct file locations. [131081920020] |For example, I have a Fedora ISO with me, but I cannot find the file /boot/vmlinuz or /boot/initrd.img in it. [131081920030] |At the very least you should have: [131081920040] |Maybe you misunderstood that, but linux and initrd above point to the entries inside the ISO, not on your hard drive. [131081940010] |Access control to resources such as files and directories. [131081940020] |In Unix, permissions may be specified for an owner, group, or all users [131081950010] |mouse scroll in KVM guest does not work [131081950020] |When the mouse is captured by the KVM guest, the mouse in the guest can move, and all the buttons work. [131081950030] |But the wheel scrolling function does not work. [131081950040] |Any idea how to fix this? [131081960010] |How do I configure OpenVPN as a gateway client for Witopia? [131081960020] |I have the following setup: [131081960030] |
  • Witopia SSL account
  • [131081960040] |
  • Synology 409 NAS (with OpenVPN and Apache etc)
  • [131081960050] |
  • PS3
  • [131081960060] |
  • Mac
  • [131081960070] |
  • Apple AirPort router (configured for NAT)
  • [131081960080] |
  • Locked IPT-box (using DHCP and NAT traversing)
  • [131081960090] |- [131081960100] |Requirements: [131081960110] |
  • The NAS should handle the VPN connection with Witopia.
  • [131081960120] |
  • All connections originating outside the router that are routed to the NAS or Mac should reach their target. [131081960130] |Nothing originating from outside should enter the VPN tunnel.
  • [131081960140] |
  • All connections that originate behind the router and are "aimed specifically" at the NAS should reach it and not be hijacked by the VPN tunnel (NFS, SMB, HTTP, etc.).
  • [131081960150] |
  • Connections that originate from applications on the NAS and terminate anywhere outside the router should go through the VPN tunnel.
  • [131081960160] |
  • All connections from the PS3 that terminate outside the router should go through the tunnel. [131081960170] |Connections from the PS3 to the NAS (SMB, NFS, DLNA, etc.) should not go through the tunnel, but be served by the NAS.
  • [131081960180] |
  • All connections to and from the IPT-box should go directly via the router, not the tunnel.
  • [131081960190] |Is this possible with the hardware that I already have? [131081960200] |How and where should I configure it? [131081960210] |/BE [131081970010] |I'm using Astrill on my DD-WRT router. [131081970020] |This was so easy to setup and Astrill has VPN servers in more than 25 countries. [131081970030] |This is their demo video "How to setup OpenVPN on DD-WRT" - https://www.astrill.com/public/DDWRT-Setup2.mp4 I think you can ask them for a free trial to make sure it works OK on your hardware. [131081970040] |But if you have Witopia account already, it's better to contact their support and they should help you. [131081980010] |Add all users in one group to another group? [131081980020] |I have some users in a group called aa and I need to give them the rights to write to a folder which is currently tomcat:tomcatdeploy. [131081980030] |The straightforward way to do this is to add all the users to the tomcatdeploy group, one-by-one. [131081980040] |Is it possible to say that members of group aa are also automatically members of tomcatdeploy by somehow adding the aa group to the tomcatdeploy group? [131081980050] |Or is that trying to push the UNIX permissions scheme too far? [131081990010] |You could use the lid command to get a list of users in aa, and the loop over that list to add them to tomcatdeploy: [131082000010] |You didn't mention what platform you are on, but a Linux system from within the last 5 years (or more, probably) supports ACLs, in addition to the traditional Unix file system permissions. [131082000020] |By using ACLs, you can add the 2nd group to the directory with write access: [131082000030] |You can use getfacl to display the ACLs and ls -l will show a '+' at the end of the usual symbolic mode string. [131082010010] |Add latency when accessing certain websites under linux [131082010020] |I've been playing with tc on my local machine. sudo tc qdisc add dev eth0 root netem delay 2000ms adds a 2-second latency to all traffic going through eth0. [131082010030] |What I want is to be able to limit this latency to certain websites. [131082010040] |I've tried playing with filters on specific IP addresses (Google's, for instance) but nothing seems to work. [131082010050] |Any ideas on how I can do this using tc? [131082010060] |Is it the best tool for the job? [131082020010] |You can mark packets with iptables in the mangle table and assign a class id to shape with tc. [131082020020] |The lartc.org site has very detailed informations on how to do it (and much more). [131082040010] |mount command, allows mounting of a file system at a specified point in the file hierarchy [131082060010] |A general purpose operating system built on top of the Linux kernel, developed by the community-supported openSUSE Project [131082090010] |Are usb pendrives/sticks random access? [131082090020] |Do USB pendrives/sticks have random access (i.e. accessing one block will take equal amount of time, regardless of the previous block read) characteristic? [131082110010] |Evolution provides integrated mail, addressbook and calendaring functionality to users of the GNOME desktop [131082130010] |The apache web server [131082140010] |Mounting USB disks automatically (How it works) [131082140020] |Background: I am trying to mount a usb disk as read only but my ubuntu install is mounting it rw when I plug the disk in. [131082140030] |I can unmount the disk manually and remount it manually as read only with the umount and mount commands but thats no fun. 
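For reference, the manual workaround being described is roughly the following (device and mount point are examples taken from the question's later edit):

    umount /media/LaCie
    mount -o ro /dev/sdb1 /media/LaCie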
[131082140040] |Could someone give me a quick explanation on how exactly usb mounts are automatically done on a typical linux system (udev? historical background is nice too) and maybe how I can tweak this process into letting me read the disk ro? [131082140050] |Thanks. [131082140060] |Edit: I'm using gnome if that helps at all. [131082140070] |Edit2: In my haste I forgot to provide a bit more information. [131082140080] |This is what the disk looks like from the output of 'mount'. [131082140090] |/dev/sdb1 on /media/LaCie type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096) [131082140100] |Edit3: This also may be relavent in its own way. [131082140110] |In the mount output I also have the following: [131082140120] |gvfs-fuse-daemon on /home/fletcher/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=fletcher) [131082140130] |I thought this might have been related to the above fuseblk mount, but what I found out was this. [131082140140] |Gvfs is the Gnome virtual file system. [131082140150] |It is a virtual filesystem built on top of the already existing kernel vfs. gvfs uses the GIO library (which is a VFS API) to access files, devices, remote network locations, etc. [131082140160] |In this case above ('gvfs-fuse-daemon') gvfs is using FUSE to mount files/locations/devices. [131082140170] |This is essentially what happens when you mount a remote network connection in Nautilus. [131082140180] |It will use FUSE to mount the location (inside?) the .gvfs directory, and then it will communicate with the gnome virtual file system layer to communicate with the new mount. [131082140190] |Basically this structure allows the user to dynamically mount new locations and interact with them through nautilus. [131082140200] |Just for reference: FUSE is a userspace filesystem, aka it allows the user to run mount even when that user is not root) [131082140210] |So where does that leave me? [131082140220] |Well the LaCie disk is being mounted with type fuseblk. [131082140230] |This is just a block device mounted with fuse. [131082140240] |So some daemon has autodetected the drive when it was plugged in and then gone ahead and run fuse to mount my block device. [131082140250] |So what daemon is this, and how is it configured (my guess is it is some internal gnome thing) is the most important question. [131082140260] |A secondary question is how the system automatically detected a newly inserted usb disk, but I think thats a bit of an aside and much lower level here (read: udev?). [131082140270] |Links: Gvfs: http://fedoraproject.org/wiki/Features/Gvfs gvfs-fuse-daemon: http://old.nabble.com/What-is-gvfs-fuse-daemon--td19458605.html FUSE: http://www.mjmwired.net/kernel/Documentation/filesystems/fuse.txt GIO: http://library.gnome.org/devel/gio/unstable/ch01.html [131082150010] |I tried to do this on my computer and it's work :) [131082150020] |First I get a name for my device : [131082150030] |In my case it is /proc/disk/by-id/usb-09a6_8001 [131082150040] |I added this line in /etc/fstab : [131082150050] |And it's working, when I plug my usbkey, it's mounted ro and owned by my user. [131082160010] |How to clear DNS cache on DD-WRT [131082160020] |How can I clear the DNS cache in DD-WRT on my router? [131082160030] |DD-WRT uses the dnsmasq daemon. [131082170010] |According to Flush dnsmasq dns cache: [131082170020] |dnsmasq is a lightweight DNS, TFTP and DHCP server. [131082170030] |It is intended to provide coupled DNS and DHCP service to a LAN. 
[131082170040] |Dnsmasq accepts DNS queries and either answers them from a small, local, cache or forwards them to a real, recursive, DNS server. [131082170050] |This software is also installed on many cheap routers to cache DNS queries. [131082170060] |Just restart it to flush out the DNS cache: [131082190010] |DD-WRT is free Linux-based firmware for several wireless routers [131082210010] |Disk partitioning is the act of dividing a hard disk drive into multiple logical storage units referred to as partitions, to treat one physical disk drive as if it were multiple disks [131082230010] |Profile and configuration utilities for GNU Screen, previously called screen-profiles [131082250010] |Abbreviation for internationalization [131082260010] |Integrated Development Environment [131082280010] |Linux-like environment for Windows making it possible to port software running on POSIX systems (such as Linux, BSD, and Unix systems) to Windows [131082300010] |Compiz is one of the first compositing window managers for the X Window System that uses 3D graphics hardware to create fast compositing desktop effects for window management. [131082320010] |A GNU/Linux based firmware program for embedded devices such as residential gateways and routers [131082340010] |A lightweight desktop environment for UNIX-like operating systems [131082360010] |Xen is a virtual-machine monitor for IA-32, x86-64, Itanium and ARM architectures [131082380010] |ZFS is a combined file system and logical volume manager designed by Sun Microsystems. [131082400010] |File Allocation Table (FAT) is a computer file system architecture now widely used on many computer systems and most memory cards, such as those used with digital cameras [131082410010] |

    Overview of a computer boot sequence

    [131082410020] |When a computer boots, it first runs firmware stored in persistent memory. [131082410030] |On PCs, this firmware is called the BIOS. [131082410040] |If you have a problem at this stage, it's off-topic for this site, since Unix is not involved yet, but try asking on Super User. [131082410050] |The firmware then loads a bootloader, typically from disk or from the network. [131082410060] |Although bootloaders are not part of the operating system proper, questions about bootloaders typically associated with unix and Linux are welcome on this site. [131082410070] |The bootloader loads the operating system kernel. [131082410080] |The kernel initializes itself and some hardware devices, then on typical Unix systems runs the init program. [131082410090] |Init in turn starts system services, including programs that present a login prompt. [131082410100] |

    Related tags

    [131082410110] |
  • boot-loader for bootloaders in general
  • [131082410120] |
  • dual-boot if you have more than one operating system
  • [131082410130] |

    Bootloaders

    [131082410140] |
  • grub (and grub2): a versatile bootloader used by many Linux distributions
  • [131082410150] |
  • lilo: the traditional bootloader for Linux on PCs
  • [131082410160] |

    Kernel boot sequence

    [131082410170] |
  • initrd, initramfs: on Linux, a virtual RAM disk that is loaded by the kernel before the “real” OS starts. [131082410180] |The code in the RAM disk typically loads additional drivers (modules).
  • [131082410190] |

    Unix boot sequence

    [131082410200] |
  • init: process number 1
  • [131082410210] |
  • init-script: scripts that start and stop services, invoked by init at boot time
  • [131082410220] |
  • upstart: a replacement for the traditional init program
  • [131082420010] |This tag covers both bootloader issues (what happens before the operating system starts) and the starting up of the operating system [131082440010] |What is Fedora's equivalent of 'apt-get purge'? [131082440020] |In Debian, there are at least two ways to delete a package: [131082440030] |
  • apt-get remove pkgname
  • [131082440040] |
  • apt-get purge pkgname
  • [131082440050] |The first preserves system-wide config files (i.e. those found in "/etc"), while the second doesn't. [131082440060] |What is Fedora's equivalent of the second form, purge? [131082440070] |Or maybe I should rather ask if yum remove pkgname actually preserves config files. [131082450010] |yum remove is not guaranteed to preserve configuration files. [131082450020] |As stated in the yum HOWTO: [131082450030] |In any event, the command syntax for package removal is: [131082450040] |As noted above, it removes package1 and all packages in the dependency tree that depend on package1, possibly irreversibly as far as configuration data is concerned. [131082450050] |Update [131082450060] |As James points out, you can use the rpm -e command to erase a package but save backup copies of any configuration files that have changed. [131082450070] |For more information, see Using RPM to Erase Packages. [131082450080] |In particular: [131082450090] |It checks to see if any of the package's config files have been modified. [131082450100] |If so, it saves copies of them. [131082460010] |There is no equivalent for "purge"; just use yum remove package. [131082460020] |You can also use yum reinstall package when you want to reinstall a package. [131082480010] |wget - command-line utility to download content non-interactively [131082500010] |kill - Send a specified signal to a process or process group [131082520010] |cp - Command-line tool to copy a file [131082530010] |BusyBox (“the Swiss Army knife of embedded Linux”) combines common command-line utilities into a single executable. [131082530020] |It includes a shell, file utilities such as ls and cp, text utilities such as grep and sed, basic system utilities such as init and syslogd, system administration utilities such as fsck and sysctl, networking utilities such as ping and ifconfig, and more. [131082530030] |It is intended for small Linux systems such as boot floppies and embedded devices. [131082530040] |

    External links

    [131082530050] |
  • BusyBox FAQ
  • [131082530060] |
  • BusyBox command help (note that some commands and options are optional and may not be present in your binary)
  • [131082540010] |BusyBox combines tiny versions of many common UNIX utilities into a single small executable [131082560010] |Virtual Private Network [131082580010] |An x86 virtualization software package developed by Sun Microsystems [131082600010] |The tar archive format and/or the command-line utility for working with tar files [131082610010] |Is there a (light-weight) replacement for `rxvt-unicode`? [131082610020] |I am currently using rxvt-unicode as a terminal emulator. [131082610030] |Since I also like the configurability of terminal emulators from GNOME and KDE, I wonder whether there is some kind of replacement for rxvt-unicode with more features but not as many dependencies as the GNOME or KDE terminals. [131082610040] |Specifically, I am interested in the following: [131082610050] |
  • Tabs
  • [131082610060] |
  • Font size changes on the fly (via CTRL++ or something similar)
  • [131082610070] |
  • UTF-8 support (OK, urxvt already has this, obviously)
  • [131082610080] |
  • Possibility to open a new tab/new window at the current directory
  • [131082620010] |Have you tried the xfce terminal emulator? [131082620020] |It meets all of your requirements except for the font zoom. [131082630010] |Requiring tabs is going to kill you for size. [131082630020] |For a lightweight terminal that has "tabs", I would recommend eterm with escreen. [131082630030] |Escreen is an interface layer which allows Eterm to support multiple subshell sessions within a single window by interoperating with GNU screen. [131082630040] |Thus, on the surface, it works similarly to the "tabbed" sessions you're used to. [131082630050] |If this doesn't work, you could check out the Fluxbox window manager. [131082630060] |Fluxbox allows you to group any windows together into a tabbed interface, and you can even switch tabs with CTRL+TAB. [131082630070] |In that case, you could use any lightweight terminal that suits your needs, whether it supports tabs or not: aterm, eterm, xterm, etc. [131082640010] |No, but: rxvt-unicode has tab support. [131082640020] |Add this to your .Xdefaults/.Xresources: [131082640030] |urxvt.perl-ext-common: default,tabbed [131082640040] |I also like to disable the 'selection' perl extension so that it is possible to triple-click-and-drag to select multiple complete lines (as in xterm). [131082640050] |Then your .Xdefaults would look like this: [131082640060] |urxvt.perl-ext-common: default,tabbed,-selection [131082660010] |A segmentation fault (often shortened to segfault) or bus error occurs when the hardware notifies a Unix-like Operating system about a memory access violation [131082680010] |A hostname is the label given to a device connected to a computer network [131082690010] |Is there a 'rc' configuration file for grep/egrep? (~/.egreprc?) [131082690020] |I usually do some grep when developing, and there are some extensions that I always don't want to look for (like *.pyc). [131082690030] |Is it possible to create a ~/.egreprc or something like that, and put some filtering to always not show pyc files in result. [131082690040] |Is this possible, or will I have to create an alias for using grep in this manner, and call the alias instead of grep? [131082700010] |Not in GNU grep/egrep, there isn't. [131082700020] |You probably want a specially-named alias or script to do this anyway, because someday later you might find yourself very confused when trying to look in something that you forgot matches your special configuration. [131082710010] |ack can do what you want and more. [131082710020] |It will ignore binary files by default and you can tell it to ignore other filetypes as you want --notext, --nohtml, etc. [131082710030] |It has the ability to define an rc file too so you can customize it with your own types. [131082720010] |There's no rc file for grep, but you can set the environment variable GREP_OPTIONS to a whitespace-separated list of options that will apply to all grep commands. [131082720020] |This requires GNU grep (the implementation on non-embedded Linux, Cygwin and some other systems) 2.4; the --exclude option is new to GNU grep 2.5. [131082720030] |Put something like this in your ~/.profile: [131082720040] |If you've done this but want to run grep once without those default options, you can override them (e.g. grep --color=never to force colors off) or empty the GREP_OPTIONS variable by running GREP_OPTIONS= grep …. [131082720050] |Another possibility is to define aliases for grep. [131082720060] |I use a variant of the following: [131082730010] |How to make a folder private? 
[131082730020] |I'm using Ubuntu with several users and I want to make folders viewable just for a specific user. [131082730030] |Any idea how to do it? [131082730040] |Thanks [131082740010] |Edit: On second thought, I realized that I may have misunderstood the question. [131082740020] |If you want the directory to only be visible to a specific user (i.e. for every other user, that list doesn't even show up in the directory listing), you can't do that without prohibiting other users from listing the contents of the parent directory. [131082740030] |So if the directory is /foo/bar then you can remove the x permission on /foo (for everyone but the owner) preventing other users from listing the contents of /foo, but you can't hide /foo/bar specifically (though you can certainly hide its contents). [131082740040] |Old answer: [131082740050] |Make that specific user the owner of the directory and then remove all permissions on that directory for everyone but the owner. [131082740060] |In the shell, you'd use chmod to do this: [131082740070] |If you're using a filemanager, just remove all the checkmarks in the permissions-tab except the ones in the "User"-column (the exact details depend on the filemanager of course). [131082750010] |This tutorial explains in all details how to set file permissions in Linux (and therefore also in Ubuntu) such that you can control the access. [131082760010] |If you prefer to do this from the GUI, this is explained in the help for the GNOME file browser (known as Nautilus): [131082760020] |http://library.gnome.org/users/user-guide/2.29/nautilus-permissions.html.en#nautilus-permissions-folder [131082760030] |Basically, you just select the folder, and then from the menu choose File->Properties->Permissions. [131082760040] |Otherwise, sepp2k's answer is absolutely correct, and txwikinger's link is great if you're interested in an in-depth explanation of unix file permissions.
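A minimal sketch of the chmod approach described above, assuming the directory is /home/alice/private and alice should be the only user able to enter it:

    chown alice: /home/alice/private
    chmod 700 /home/alice/private    # rwx for the owner, nothing for group/others

The same effect can be achieved symbolically with chmod u=rwx,go= /home/alice/private.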