[131078060010] |Output of the "last" command
[131078060020] |Can anybody explain to me the meaning of the last column of the output of the last command?
[131078060030] |I'm particularly interested in its meaning with respect to the reboot pseudo-user.
[131078060040] |What does that 9+09:37 mean?
[131078070010] |reboot and shutdown are pseudo-users for system reboot and shutdown, respectively.
[131078070020] |That's a mechanism for logging that information, along with the kernel version, to the same place, without adding any special record formats to the wtmp binary file.
[131078070030] |Quote from man wtmp:
[131078070040] |The wtmp file records all logins and logouts.
[131078070050] |Its format is exactly like utmp except that a null username indicates a logout on the associated terminal.
[131078070060] |Furthermore, the terminal name ~ with username shutdown or reboot indicates a system shutdown or reboot and the pair of terminal names |/} logs the old/new system time when date(1) changes it.
[131078070070] |The wtmp binary file saves nothing other than a timestamp for each event.
[131078070080] |For example, last calculates additional things, such as login times.
[131078070090] |The last column (in parentheses) is the length of the event.
[131078070100] |For the user reboot it's the uptime.
[131078070110] |After the latest reboot, the time shown is the current uptime.
[131078070120] |For earlier reboots, the time is the uptime after that reboot (so in the last line of my example it's the uptime until the first line; there were no reboots in between).
[131078070130] |The number before the + is the number of days.
[131078070140] |In the last line it's 9 days, 9 hours and 37 minutes, and in the first line the current uptime is 1 hour and 7 minutes.
[131078070150] |Note however that this time is not always accurate, for example after a system crash and an unusual restart sequence; last calculates it as the time between the event and the next reboot/shutdown.
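For illustration, last reboot output of the kind being described looks something like this (a hedged, made-up sample; the kernel version and dates are invented, but the durations match the figures discussed above):

    reboot   system boot  2.6.32-5-amd64  Mon Feb 21 09:02   still running  (01:07)
    reboot   system boot  2.6.32-5-amd64  Sat Feb 12 00:25 - 10:02          (9+09:37)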
[131078080010] |Reading EDID from EEPROM
[131078080020] |From the link: http://en.wikipedia.org/wiki/Extended_display_identification_data
[131078080030] |The EDID is often stored in the monitor in a memory device called a serial PROM (programmable read-only memory) or EEPROM (electrically erasable PROM) and is accessible via the I²C bus at address 0x50.[1] The EDID PROM can often be read by the host PC even if the display itself is turned off.
[131078080040] |How can I read that information?
[131078090010] |A few days ago, I was also wondering.
[131078090020] |I found the Xorg implementation, but I didn't look at the details.
[131078090030] |Also, apparently, the kernel module i2c-dev can be used to read the EDID.
[131078090040] |http://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/ddc
[131078100010] |You may want to try
[131078100020] |http://polypux.org/projects/read-edid/
[131078110010] |Answer to my own Question: [:)]
[131078110020] |
  • i2cdetect -l : to list the existing I2C adapters
  • [131078110030] |
  • echo Y | i2cdump $i 0x50 : where i is the device id of each /dev/i2c-*
  • [131078110040] |
  • The above command will print the raw EDID for the given I2C adapter.
  • [131078110050] |
  • Now custom C (or any other programming language) can be used to parse the information into a neat description of the underlying device.
  • [131078110060] |[Though I used i2c-tools, it would still be far better to use low-level (assembly) programming to read the EEPROM values; a combined sketch follows this list.]
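Combining the steps above into one loop, a minimal sketch (assumes i2c-tools and the i2c-dev kernel module are available; run as root, and note that probing arbitrary I2C addresses can upset some devices):

    modprobe i2c-dev                  # expose the adapters as /dev/i2c-*
    for dev in /dev/i2c-*; do
        i=${dev#/dev/i2c-}            # extract the bus number
        echo "=== bus $i ==="
        echo Y | i2cdump "$i" 0x50    # the 0x50 EDID address from the question
    done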
[131078120010] |ps command not giving output in home directory
[131078120020] |I am logging into a Solaris server, switching to bash, then switching to another user "sruser" and switching to bash again.
[131078120030] |/home/batch/sruser/ is the home directory of the user "sruser".
[131078120040] |The issue is that ps is not giving any output when run in the home directory -
[131078120050] |I don't know what the issue could be.
[131078120060] |I don't even know where to start looking for it.
[131078130010] |$PATH contained . (the current directory), and there was a file named ps with executable permissions in the home directory.
[131078130020] |Hence the command was not giving any output.
[131078140010] |Include / in symlink to a directory?
[131078140020] |Symlinking to a directory gives two different results with ls -l depending on whether I ln -s dir or ln -s dir/.
[131078140030] |But what's the actual difference, and which one should I prefer, and why?
[131078150010] |Interesting question.
[131078150020] |I've made a small test:
[131078150030] |As you can see, there is no difference in the number of system calls (at least for ls) and the traces look very similar.
[131078150040] |However, this is just a crude test and I'm not sure - there might be some differences.
[131078160010] |The only thing I can think of is that it "protects" you from someone deleting the directory and creating a file.
[131078160020] |The version with the slash breaks when the target is replaced with a file.
[131078170010] |There's no difference.
[131078170020] |The final slash might have ended up there because of shell completion: with some configurations, ln -s tarTabSpacelink completes to ln -s target/ link.
[131078180010] |Where is the setting that sets the Experimental repo to lower priority?
[131078180020] |Suppose that I have enabled Experimental in my "/etc/apt/sources.list".
[131078180030] |When I need to install anything from it, I have to explicitly specify it on the command line:
[131078180040] |Where is such a setting stored?
[131078180050] |I expected that it would be some sort of pinning, but I can't find anything suggesting such in "/etc/apt/".
[131078180060] |Also, is it possible to avoid this behavior?
[131078190010] |The pinning of experimental and backports is in the repository itself (in the Release files).
[131078190020] |You can change the pinning as you wish, but you have to do it explicitly.
[131078190030] |Look in the experimental Release file for the line NotAutomatic: yes.
[131078190040] |It comes from there.
[131078190050] |You can see the actual pinning values with apt-cache policy wajig.
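To inspect or override that behaviour, a hedged sketch (syntax per apt_preferences(5); the priority value and file name are illustrative):

    apt-cache policy wajig     # shows the pin priority of each candidate version
    # raise experimental above the NotAutomatic default of 1:
    printf 'Package: *\nPin: release a=experimental\nPin-Priority: 500\n' \
        | sudo tee /etc/apt/preferences.d/experimental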
[131078200010] |Is there a tool that hooks to two directories and synchronizes them as soon as something changes?
[131078200020] |I have huge simulation directories on the two machines, server and cluster, which I want to keep synchronized.
[131078200030] |The basic situation is that files rarely change, but when a simulation is done many big files change at once, which I'd like to have synchronized as soon as they are closed.
[131078200040] |Therefore I assume a cron job using rsync is not ideal: cron because it calls rsync at a fixed time interval which I'd probably choose either too large or too small, and rsync because it shouldn't have to check for modifications of files, since I and the simulation job are the only ones accessing them.
[131078200050] |So my idea would be to use inotify (see this question) to detect changes in the simulation directory in a loop and then fork an rsync for the changed files.
[131078200060] |However I'm not sure whether that might accidentally skip files closed just when the fork is done (and possibly I'd also end up in an infinite loop due to inotify calling rsync to sync the just-synchronized file again...).
[131078200070] |So before I try too much there, I repeat my question:
[131078200080] |Is there a tool that hooks to two directories and synchronizes them as soon as something changes?
[131078200090] |(Basically this would be something like an offline Dropbox, I guess.)
[131078210010] |Check out lsyncd.
[131078210020] |Lsyncd watches a local directory tree's event monitor interface (inotify).
[131078210030] |It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes.
[131078210040] |By default this is rsync.
[131078210050] |Lsyncd is thus a lightweight live mirror solution that is comparatively easy to install, requires no new filesystems or block devices, and does not hamper local filesystem performance.
[131078210060] |It's not two-way, but from your question I understood you don't need that either.
[131078210070] |If you need two-way synchronization, Unison is a good answer, except there is no inotify support.
[131078210080] |Also, check out this question.
[131078210090] |A third option for two-way synchronization is DRBD, a block-level realtime synchronization system included in the mainline kernel.
[131078210100] |Unfortunately, as it is almost synchronous, it requires a fast internet connection.
[131078220010] |How to set default APT source
[131078220020] |Running cat /etc/apt/apt.conf gives:
[131078220030] |I thought doing that would disallow the command apt-get install wajig from working if a newer version was available anywhere but Stable.
[131078220040] |How do I set up APT so that such a version would require me to specify the repository name in order to install it (e.g. apt-get --target-release testing wajig)?
[131078220050] |[update] I wasn't aware that my question wasn't so clear.
[131078220060] |I want this to be an archive-wide setting (i.e. it should apply to each package in the Stable archive), not one for some specific package.
[131078230010] |If there is a wajig package with a positive pin priority in any of your sources, apt-get install wajig will install it. Default-Release works like setting a high priority for that particular release.
[131078230020] |If I understand correctly, you'd like apt-get install wajig to work if squeeze has the latest version and not to work otherwise; I don't think that's possible.
[131078240010] |Put this in your "/etc/apt/preferences":
[131078240020] |This is from man apt_preferences, where P means Pin-Priority:
[131078240030] |See this Debian wiki page for something gentler than the manpage.
[131078250010] |Connected Display Device Information
[131078250020] |When I connect or disconnect a USB device, udev shows/monitors that event.
[131078250030] |But udev is not smart enough to detect the plug-out or plug-in of the monitor.
[131078250040] |Is there any way/tool/utility from which I can get real-time information about the plug-out or plug-in of a monitor?
[131078260010] |I think there is no daemon for that.
[131078260020] |You could however write a script to periodically parse Xorg.log, or you may be able to use xrandr, but I am not at my Linux box right now, so I can't tell exactly.
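A rough sketch of that xrandr polling idea (the two-second interval is arbitrary and xrandr output formats vary; treat this as an illustration, not a tested tool):

    prev=$(xrandr | grep ' connected')
    while sleep 2; do
        cur=$(xrandr | grep ' connected')    # list of currently connected outputs
        if [ "$cur" != "$prev" ]; then
            echo "display configuration changed"   # react here
            prev=$cur
        fi
    done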
[131078270010] |Lightweight MS Paint/MacPaint equivalent FLOSS in Linux?
[131078270020] |What is an equivalent to MS Paint and MacPaint that works on Linux?
[131078270030] |There is a BSD-licensed one called Paintbrush that's great for Mac OS X, but I have not found one on Linux.
[131078270040] |I hope it can be lightweight, i.e. few dependencies and no need for GNOME, KDE, etc., but anything is fine.
[131078280010] |MyPaint is my favorite alternative to MS Paint.
[131078290010] |Depending on what you're using this for, Tux Paint might fit the bill.
[131078290020] |It's designed for children and my kids find it fun to use.
[131078300010] |Tux Paint - http://tuxpaint.org/
[131078300020] |KolourPaint - http://www.kde.org/applications/graphics/kolourpaint/
[131078310010] |Here's some basic functionality offered by APT:
[131078310020] |
  • install/remove a software package (requires admin privileges): [131078310030] |
  • display basic info about a package [131078310040] |
  • display packages whose names and/or descriptions match the search pattern (typical invocations are sketched below)
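The command examples did not survive in this dump; typical invocations for the three items above (hedged, Debian-style; wajig is just the package name used elsewhere on this page) would be:

    sudo apt-get install wajig    # install a package (admin privileges)
    sudo apt-get remove wajig     # remove it again
    apt-cache show wajig          # basic info about a package
    apt-cache search editor       # search names and descriptions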
[131078310050] |Here's a usage comparison with other package managers.
[131078310060] |[note] APT keeps a log of some of its activities at "/var/log/apt/history.log".
[131078320010] |APT (Advanced Packaging Tool) is the standard software package management system for Debian and its derivatives.
[131078330010] |vim: Can I make *only* y perform a copy?
[131078330020] |I'm learning to like vim, but one thing that I haven't yet fully embraced is the copy and paste situation.
[131078330030] |In a muggle text editor I would Ctrl-C copy a piece of text, then edit/delete, and then paste the text somewhere else.
[131078330040] |However in vim I can't do this, because after I copy text with y, modifying the text often involves d and x, which obliterates my original text.
[131078330050] |Is there any way to make d and x not copy the text?
[131078330060] |Or otherwise, how should I be using these vim features?
[131078340010] |The commands you're talking about use the so-called "unnamed" register.
[131078340020] |It is meant only for very short-term use.
[131078340030] |"I'm gonna yank this from here and put it over there" sort of things.
[131078340040] |When you want to hold onto something for more than a few seconds while you go and do something else, you should be using named registers.
[131078340050] |For instance, " a y y will yank the line the cursor is on into the a register.
[131078340060] |Nothing will clobber it accidentally, unless you type another command involving the a register.
[131078340070] |Minutes, hours, even days later, you can type " a p to drop that yanked copy of the line from the a register below the one the cursor is on.
[131078340080] |(For this to work really well, you should have a line like set viminfo='50,\"1000 in your ~/.vimrc file, to tell it to remember things like register contents across Vim sessions.
[131078340090] |You can then go on vacation between yank and put!)
[131078340100] |There are 26 named registers (a-z).
[131078340110] |If you give them as uppercase instead of as shown above, you append to the current register contents instead of replacing them.
[131078340120] |So, you can build up something really complex in, say, register h one piece at a time, then plop it all down at once with " h p.
[131078340130] |Notice that the register name is optional.
[131078340140] |This implies that there may be many commands you already know and use where you could be using named registers.
[131078340150] |Say :help registers in Vim to get some idea of the possibilities.
[131078340160] |Also, get a Vi mug.
[131078350010] |The 9 previous deletions are saved in registers called 1 through 9 (the most recent yank is in register 0).
[131078350020] |You can recall the next-to-last deletion with "1p, the previous one with "2p, and so on.
[131078350030] |The command :reg shows the registers that are available for pasting.
[131078350040] |If you want a yank to last longer, use a letter register.
[131078350050] |For the more obscure yank-related commands, start reading at :help " in the manual.
[131078360010] |Using grep/sed/awk to classify log file entries
[131078360020] |I need to process a very large log file with many lines in different formats.
[131078360030] |My goal is to extract unique line entries that have the same starting pattern, e.g. '^2011-02-21.*MyKeyword.*Error', effectively obtaining a list of samples for each line pattern, and thereby identifying the patterns.
[131078360040] |I only know a few patterns so far, and browsing through the file manually is definitely not an option.
[131078360050] |Please note that besides the known patterns there are a number of unknown ones too, and I'd like to automate extracting those as well.
[131078360060] |What is the best way to do this?
[131078360070] |I do know regular expressions quite well, but haven't done much work with awk/sed, which I imagine would be used at some point in this process.
[131078370010] |If I understand correctly, you have a bunch of patterns, and you want to extract one match per pattern.
[131078370020] |The following awk script should do the trick.
[131078370030] |It prints the first occurrence of the given pattern, and records that the pattern has been seen so as not to print subsequent occurrences.
[131078370040] |Here's a variant that keeps one MyKeyword.*Error line per day.
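The scripts themselves were lost from this dump; sketches consistent with the descriptions (the patterns and file name are illustrative, not the originals) might be:

    # print only the first line matching each known pattern
    awk '!seen1 && /^2011-02-21.*MyKeyword.*Error/   { seen1=1; print }
         !seen2 && /^2011-02-21.*OtherKeyword.*Fail/ { seen2=1; print }' huge.log

    # variant: keep one MyKeyword.*Error line per day, keyed on the leading date field
    awk '/MyKeyword.*Error/ && !seen[$1]++' huge.log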
[131078380010] |Bash Script on Startup? (Linux)
[131078380020] |Is there any way to make/run a bash script on reboot (like in Debian/Ubuntu for instance, since that's what my 2 boxes at home have)?
[131078380030] |Also, any recommended guides for doing cron jobs?
[131078380040] |I'm completely new to them (but they will be of great use).
[131078390010] |On Ubuntu/Debian/CentOS you can set up a cron job to run @reboot.
[131078390020] |This runs once at system startup.
[131078390030] |Use crontab -e to edit the crontab and add a line like the example below, e.g.
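(The original example line was lost in this dump; an illustrative @reboot entry, with a placeholder path, would be:)

    @reboot /home/user/bin/startup-task.sh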
[131078390040] |There are lots of resources for cron if you look for them.
[131078390050] |This site has several good examples.
[131078400010] |Another typical way to start something at boot on many *nix platforms is (or was; I think this may be starting to lose favor -- see alternatives) to put scripts in a directory which, depending on the particular OS/distribution, might be something like /etc/rc2.d, /etc/rc3.d, /etc/rc/rc3.d, or the like (different distributions use different "run levels", which is where the number comes from -- see the link below).
[131078400020] |Frequently, these are also symlinked either into or at files from /etc/init.d, for easier execution by hand, and they take a "start" and/or "stop" argument on most *nix platforms, and also "status", "restart", etc. on many Linux platforms.
[131078400030] |On such systems, these are generally executed by init, via inittab -- see SysV init scripts.
[131078400040] |On *BSD systems, there's a different style of a similar concept, and, as linked above, there are a bunch of variations.
[131078400050] |In the above style, scripts in, e.g., /etc/rc2.d (for a system with a default runlevel of 2) typically start with either the letter S or K, and then a two-digit number.
[131078400060] |The scripts that start with S are run in lexicographic order (which translates, generally, into numeric order) when booting up into level 2, with an argument of "start".
[131078400070] |When shutting down, the scripts prefixed with K are similarly run, with an argument of "stop".
[131078400080] |The files in /etc/init.d (or sometimes /etc/rc/init.d, or other variations) are named without the S and K prefixes and without the numbers.
[131078400090] |Typically, the files in the various /etc/rc?.d directories symlink to the real files, often referenced via the relative path prefix ../init.d/.
[131078400100] |Various utilities exist on various systems to manage these as well, turning things on and off, etc.
[131078400110] |On IRIX (since IRIX 4, at least, if my memory serves), there was a tool called chkconfig, which wouldn't manipulate the links, but which would be checked by the scripts to see if they should run or not.
[131078400120] |I think IRIX was the first OS to have something like this.
[131078400130] |Later, in some version of Red Hat that I used to have, there was a tool by the same name, but it behaved a bit differently, actually managing the symlinks -- see chkconfig(8) for what I think is likely the same (or a very similar) version as I used then.
[131078400140] |On an Ubuntu 9.04 system I have access to, it looks like update-rc.d is the script to run.
[131078400150] |If you're on a system that uses inittab, you can also add things directly there -- which can be especially useful for things that you want to run not just once at boot, but to have actively monitored (by init) and respawned if they ever crash or terminate.
[131078400160] |See the output of man inittab (if you have it) on your system for additional information.
[131078400170] |And/or man init, etc.
[131078400180] |There are lots of different flavors, and I'm not (currently) terribly familiar with either Debian or Ubuntu, so I'm not sure exactly what to point you at, but hopefully this gives you some starting points.
[131078400190] |The @reboot section in crontab is new to me, but it also seems like a useful option -- though I would suggest init scripts as being preferable for many things.
[131078400200] |But see man 5 crontab for much more info on what you can put in your cron configuration, and how it can be told to run things, and when (including, assuming a Vixie/ISC version of cron [see cron history], with @reboot).
[131078400210] |I hope that's helpful.
[131078410010] |Is there an XML file editor for Linux with grid view support?
[131078410020] |I have been using Altova XML Editor on Windows for a long time.
[131078410030] |It has a wonderful feature that lets you view XML files in "grid view."
[131078410040] |This makes life easy when reading very complex XML.
[131078410050] |My question is, is there an XML editor available on Linux with a similar feature?
[131078410060] |EDIT: I am looking for a free alternative.
[131078410070] |Following is an example of grid view:
[131078420010] |It's not free, but oXygen has this feature, and runs on all three major platforms.
[131078420020] |(It's Java-based.)
[131078420030] |They have a screencast demo of the feature.
[131078420040] |You can get oXygen in both a standalone version and one that runs in Eclipse, which is nice since you may already be using Eclipse for developing the parts of the system that consume or produce the XML.
[131078430010] |I am highly interested in the Stack Exchange concept.
[131078430020] |It just makes life so much easier for Unix/Linux Q&A, and has encouraged me to ask a whole bunch of questions I wouldn't have bothered with otherwise, knowing the level of quality of the answers I'd get.
[131078430030] |I'd like this to be THE place to go for Unix/Linux Q&A, much like Stack Overflow is THE place for computer programming Q&A.
[131078430040] |That's one reason I was excited to reach 2k rep, which allows editing anything I feel I can improve (and that was before this boon).
[131078430050] |My little contribution towards that goal is helping improve the quality, which is something I'm deeply interested in.
[131078430060] |I have been rather active here and on meta, with a combined activity count of over a thousand items (that surprised me, BTW).
[131078430070] |There are hundreds more on the main Meta.
[131078430080] |I go there a lot when I need to suggest something, or complain, or just to see what's interesting.
[131078430090] |I also live in room 26, so I can at least read the title of each question asked on this site.
[131078430100] |But I joined it mainly so that I can participate in the discussions there. [sidenote: I only wish more users would participate there, because it makes discussions easier; it's much better than dirtying the main site with needless comments; oh, and it's much better than traditional IRC].
[131078430110] |There's at least one moderator tool I'd like to make use of, and that is viewing a list of suggested edits by low-rep users.
[131078430120] |Well, unless that request is taken into consideration, I will have to wait until I have 10k rep to do that (and I am only at 2k now).
[131078440010] |Is dpkg available for Cygwin?
[131078440020] |I don't use Windows all that often, but I've found myself in a position where I'm stuck in front of one quite a bit temporarily.
[131078440030] |So I've been investigating Cygwin.
[131078440040] |My question: is dpkg available for Cygwin?
[131078440050] |My Google searches seem to show that it was at least at one point, but I can't find a package.
[131078440060] |If it isn't available as a package, does anyone have any tips/experience getting it running?
[131078440070] |I'm not trying to create a Cygwin Debian port (although it sounds as if this was attempted in the past).
[131078440080] |At the minimum, I'd simply like to be able to build Debian source packages while on the Windows machine.
[131078440090] |(Yes, I know I could ssh over to a Debian box.)
[131078440100] |Cygwin is apparently on topic here, but it might not be the best place for this question.
[131078440110] |I hope this is ok...
[131078450010] |An attempt to get dpkg working has been abandoned, according to THIS SourceForge page that was set up to investigate getting dpkg to work on Cygwin.
[131078450020] |Stick to a VirtualBox instance or SSH.
[131078450030] |EDIT: If you are really interested, there is a huge thread about trying to get it to work here.
[131078460010] |How to make functions created in a bash script persist like those in .bashrc?
[131078460020] |My .bashrc was getting a little long, so I decided to break it up into smaller files according to topic and then call these files from within .bashrc like so
[131078460030] |but, within one of these sub-scripts, I had created a bash function.
[131078460040] |Unfortunately it is no longer available to me as it was before I broke everything into chunks.
[131078460050] |How do I break my .bashrc file into chunks, but retain access to the variables and functions that I create?
[131078470010] |If I understand your question correctly, you need to source or . your files.
[131078470020] |For example, within your .bashrc and taking care of order (only you know):
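(The example was lost in this dump; a sketch matching the description, using the .topic1rc naming that appears below:)

    # in ~/.bashrc - source the topic files in whatever order you need
    source ~/.topic1rc
    source ~/.topic2rc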
[131078470030] |source can be shortened to . on the command line for ease of use - it is exactly the same command.
[131078470040] |The effect of source is to effectively inline the script that is called.
[131078480010] |What you are doing doesn't work because each bash .topic1rc invokes a new bash shell as a child process, and there's no way to communicate the environment, shell variables, or functions back to the parent process, the bash shell you're interested in.
[131078480020] |What you want to do is use the . command (also available as the more verbose source - either one does the same thing).
[131078480030] |This reads the given script in the current shell rather than in a new shell.
[131078480040] |So:
[131078480050] |(It probably works fine for almost all uses to omit the path, since the main ~/.bashrc is usually executed when you log in and your home directory is your initial working directory, but it's much better practice to spell it out.)
[131078490010] |As others have explained, you are running those scripts in a new shell that dies after they've run.
[131078490020] |You need to source your files with either
[131078490030] |or
[131078490040] |You can also create a slightly more advanced solution that won't require you to edit your .bashrc if you want to add more chunks in the future.
[131078490050] |You can put all your configuration files under a directory, say ~/.bash.d, and then have this as the only contents of your .bashrc file:
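(The snippet was lost from this dump; a minimal sketch of such a .bashrc, using the ~/.bash.d directory described above:)

    for f in ~/.bash.d/*; do
        [ -r "$f" ] && . "$f"    # source every readable file in the directory
    done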
[131078490060] |This way all your bash config files under .bash.d are sourced, and you can control the order in which this happens, since they will be sourced in alphabetical order of the file names.
[131078500010] |Note that when you use the source or . commands, by default in bash, the file argument will be searched for in $PATH unless it contains a / character.
[131078500020] |You want to either
[131078500030] |or
[131078510010] |Downsides of umask 077?
[131078510020] |What are the cons of having a restrictive umask of 077?
[131078510030] |A lot of distros (I believe all, except Red Hat?) have a default umask of 022, configured in /etc/profile.
[131078510040] |This seems way too insecure for a non-desktop system which multiple users are accessing and where security is a concern.
[131078510050] |On a related note, on Ubuntu, the users' home directories are also created with 755 permissions, and the installer states that this is for making it easier for users to share files.
[131078510060] |Assuming that users are comfortable setting permissions by hand to make files shared, this is not a problem.
[131078510070] |What other downsides are there?
[131078510080] |(can't add tag umask yet)
[131078520010] |The most obvious downside is when you start creating files/directories in a shared directory, expecting other users to access them.
[131078520020] |Of course, it's only a matter of not forgetting to set the correct umask before doing stuff that needs to be shared by all users.
[131078520030] |Another caveat (not really a downside, once you are aware of it) is when you start doing sudo stuff such as installing local programs, ruby gems, python eggs (not OS-managed packages, obviously), creating configuration files, and so on.
[131078520040] |You will get into trouble because the umask is inherited by the sudo session, so only root will be able to access the files/dirs you create. sudo can be configured to automatically set the umask the way you want: this question is covered on superuser.com.
[131078530010] |022 makes things convenient.
[131078530020] |077 makes things less convenient, but depending on the circumstances and usage profile, it might not be any less convenient than having to use sudo.
[131078530030] |I would argue that, like sudo, the actual, measurable security benefit you gain from this is negligible compared to the level of pain you inflict on yourself and your users.
[131078530040] |As a consultant, I have been scorned for my views on sudo and challenged to break numerous sudo setups, and I have yet to take more than 15 seconds to do so.
[131078530050] |Your call.
[131078530060] |Knowing about umask is good, but it's just a single Corn Flake in the "complete breakfast".
[131078530070] |Maybe you should be asking yourself "Before I go mucking with default configs, the consistency of which will need to be maintained across installs, and which will need to be documented and justified to people who aren't dim-witted, what's this gonna buy me?"
[131078530080] |Umask is also a bash built-in that is settable by individual users in their shell initialization files (~/.bash*), so you're not really able to easily enforce the umask.
[131078530090] |It's just a default.
[131078530100] |In other words, it's not buying you much.
[131078540010] |I have this line in my ~/.zshrc:
[131078540020] |Setting it globally is probably not a good idea, but setting it as the default in your rc file is probably not going to hurt, or even setting it as the default in the /etc/skel rc files; system-wide it will cause problems though.
[131078550010] |Umask would not be appropriate if you are trying to control what other users can see from each other.
[131078550020] |However, if you have and work with numerous files that are sensitive to the point that being asked for permission to access them is less bothersome/risky than just letting people see whatever they want, then a umask of 077 would be a good idea.
[131078550030] |I have some sensitive files on a file server I manage.
[131078550040] |I think setting a restrictive umask and then having a periodic script, maybe a cron job, set more specific permissions on items in certain folders would be an ideal solution for me.
[131078550050] |When I set this up I will post back here and let you know how it worked.
[131078550060] |@[The guys bashing sudo] Start a new thread for it; it could take several threads of its own, and this thread is about umask.
[131078560010] |Linux distribution for the AMD Geode LX 800 (i586)
[131078560020] |The manufacturer recommends RH9 (ugh) and many of the Google search hits come up with comments from 2006/7 which detail Xorg problems with the AMD video driver, so I'm hoping someone here has had recent experience.
[131078560030] |I have two questions:
[131078560040] |
  • Which Linux distribution? (I'm going to use gtk/boost/wxwidgets/cairo) [131078560050] |
  • My project is going to be a GUI which is moderately performance-oriented, so is there any value in going minimal with the entire distro, given that the CPU is 500 MHz? [131078560060] |I'm thinking that if I can use some manifestation of Ubuntu or some other popular distro with rich repositories, that would probably work out best for me.
  • [131078560070] |
  • Is there a way to simulate the CPU & chipset in a virtual environment (VMware/VBox etc.) so that I don't have to cross-compile? [131078560080] |I know it's x86, so I'm assuming simply compiling on any x86 machine would do.
[131078570010] |Debian and Ubuntu support every Intel processor from the 486 on.
[131078570020] |With Debian, the minimal install is very small (~500 MB) and Ubuntu provides the Xubuntu "flavour" dedicated to small configurations.
[131078570030] |You can emulate this machine with any virtualisation software able to emulate an i586 CPU (like qemu, but others may be able to do it too).
[131078580010] |I recently installed Gentoo on an Alix board with an AMD Geode LX processor via the binhost feature of Portage.
[131078580020] |As you say, it's an x86 one (i586 to be precise), but several benchmarks show that i486 is the better way.
[131078580030] |It gives a few better results, and almost no lack of features (compared to i586).
[131078580040] |Since I haven't installed any kind of graphical user interface, I cannot say much about the performance.
[131078580050] |Even if you stick to a binary distribution, you should check the kernel features, because there are a few special drivers for this CPU (e.g. for hardware-accelerated AES).
[131078590010] |For the distro, you might want to give Damn Small Linux a go.
[131078590020] |If you install it to your hard drive, it's a minimal Debian installation configured to be light on resources.
[131078590030] |When it comes to compiling, just compile with -march=geode.
[131078590040] |That option is defined on any i386/x86-64 gcc, so there is no real need to cross-compile.
[131078590050] |If you want to run the binary on your compiler host as well (without a recompile), try something like -march=i486 -mtune=geode.
[131078590060] |Read more about those options in the GCC docs.
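To illustrate the flags just mentioned (file names are placeholders):

    gcc -O2 -march=geode -o app app.c              # optimized for the Geode itself
    gcc -O2 -march=i486 -mtune=geode -o app app.c  # also runs on the build host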
[131078600010] |How many entries are created when you make a new directory in *nix?
[131078600020] |As per the question, I am thinking mkdir ~/a creates either two or three:
[131078600030] |
  • 1 entry for the directory it sits in (~/a)
  • [131078600040] |
  • 1 entry for itself (cd a && ls .)
  • [131078600050] |
  • and/or 1 entry for itself again (cd a && ls ..)
[131078600060] |Could someone clarify whether this is two or three?
[131078610010] |In an empty directory:
[131078610020] |As you can see, there are 2 links to an empty directory.
[131078610030] |When I create a new one inside it, the link count increases to 3.
[131078610040] |Additionally, there are 2 links to the new directory.
[131078610050] |The total is 3 new links.
[131078610060] |This is because every directory has a link to itself (.) and to its parent (..).
[131078620010] |In the original Unix implementation, in order to keep the filesystem code inside the kernel simple, directory manipulation programs did some extra work: in particular, mkdir /parent/a created an entry for a in /parent, plus an entry called . in a (pointing to a itself) and an entry called .. in a (pointing to /parent¹).
[131078620020] |Pretty soon the code for mkdir and friends moved into the kernel anyway, but the filesystem format kept having explicit . and .. entries, which filesystem traversal code found by name (as opposed to having two special-format pointers in each directory).
[131078620030] |Nowadays, some (most?) filesystems fake it: directories don't actually have . and .. entries on the disk; they're generated by the driver.
[131078620040] |However, from a user's point of view, this is transparent.
[131078620050] |A directory's link count is still two plus the number of subdirectories (the entry in the parent, the directory's own ., and each subdirectory's ..).
[131078620060] |In particular, an empty directory has a link count of two (i.e. there are two entries in the filesystem pointing to it), but creating it creates three entries (the third one is .., which points to the parent).
[131078620070] |¹ By reference, not by name.
[131078620080] |So if you rename /parent, a's .. keeps pointing to a's parent directory, wherever it moves to in the filesystem structure.
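A quick demonstration of those counts (GNU stat assumed; %h prints the hard link count):

    $ mkdir a
    $ stat -c %h a     # empty directory: entry in the parent plus its own .
    2
    $ mkdir a/b
    $ stat -c %h a     # b's .. now also points to a
    3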
[131078630010] |Wifi working after running Live CD
[131078630020] |I have been using Ubuntu 10.10, and the major problem was the wifi, which was not working at all.
[131078630030] |It worked in Ubuntu 9.10.
[131078630040] |So after many futile attempts I decided to run a Live CD of Ubuntu 9.10, and then the wifi worked.
[131078630050] |Then after restarting I was amazed to see that the wifi was now working in Ubuntu 10.10.
[131078630060] |What could be the reason?
[131078630070] |I tried a lot of things, like blacklisting ath5k, but nothing worked out.
[131078630080] |Now, after running the Live CD, it's working.
[131078640010] |Some hardware needs to be initialized before it can work.
[131078640020] |The initialization is good until the hardware is turned off, so you can boot to an OS with a fully working driver, then perform a warm reboot to a second OS with a partially-working driver, and use the device in the second OS.
[131078640030] |Often the initialization consists of loading the proper firmware.
[131078640040] |What may have happened (I know the phenomenon exists but I don't know if your device is an example of it) is that the firmware was illegal to redistribute, and so was excluded from later Ubuntu releases, but is still present in 9.10 (either because you grabbed it before it went away or because no one cared to remove it from the earlier version).
[131078640050] |Of course, another possible explanation is a bug in the initialization code in the newer driver.
[131078640060] |(Yet another hypothesis is buggy hardware that only works when it's hot enough… That's rare but possible.)
[131078650010] |How to find out which wifi driver is installed?
[131078650020] |How can I find out which wifi driver is installed and running on my system?
[131078650030] |I know about lsmod, but how do I figure out which driver does what?
[131078650040] |I am running Ubuntu 10.04.
[131078660010] |Maybe there's a better way, but I've used lshw -class network (as root) and it gives me this output:
[131078660020] |You can grep for driver in that output.
[131078660030] |In my case I use lsmod | grep iwlagn, giving me:
[131078660040] |Don't ask me what each of those means :)
[131078670010] |In other words, the /sys hierarchy for the device (/sys/class/net/$interface/device) contains a symbolic link to the /sys hierarchy for the driver.
[131078670020] |There you'll also find a symbolic link to the /sys hierarchy for the module, if applicable.
[131078670030] |This applies to most devices, not just wireless interfaces.
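A hedged illustration of that /sys lookup (the interface name and the iwlagn driver are examples taken from the answers above; yours will differ):

    $ readlink /sys/class/net/wlan0/device/driver
    ../../../../bus/pci/drivers/iwlagn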
[131078680010] |Unix command that takes no arguments
[131078680020] |Don't ask why, but is there a Unix command that takes no arguments?
[131078690010] |true and false are two examples.
[131078690020] |Or did you mean commands that take no options?
[131078690030] |I can't think of any, but the original Unix version of echo didn't take any options.
[131078690040] |There's even a story about how it came to take options.
[131078700010] |nologin takes no arguments and is widely available on Linux and BSD.
[131078700020] |On CentOS 4 and 5, the arch command takes no arguments.
[131078700030] |On other Linux distros, arch is now provided by GNU coreutils, which takes --version and --help.
[131078700040] |With CentOS 4 and 5 it comes from util-linux, which differs from GNU's version.
[131078700050] |A lot of the things in GNU coreutils take only --help and --version; if you look at different Unixes that don't use GNU coreutils, they don't accept any arguments: sync, true, false, whoami, pwd, groups, users
[131078710010] |:, true, false, reset, clear, line, chkdupexe and arch all take no arguments.
[131078710020] |I'm sure they accept them, but they ignore them.
[131078720010] |Compiling code from vim
[131078720020] |I'm new to vim and I want to be able to compile code from within vim without running a new terminal and calling the compiler.
[131078720030] |How do I go about doing this?
[131078720040] |Note that this requirement is not restricted to gcc only; I sometimes also need to call python on the current script I am working on, so you get the idea...
[131078730010] |Write a Makefile, then you can simply:
[131078730020] |from inside vim.
[131078730030] |And if compilation fails:
[131078730040] |will show you warnings and errors.
[131078730050] |Selecting them will take you to the appropriate file and line.
[131078740010] |If you don't like makefiles for some reason, you can call any command with :!
[131078740020] |For example you can use ":!javac *.java && java Foo" to compile and run a simple Java program.
[131078740030] |You can also call other build systems (scons, cmake, ...) this way.
[131078750010] |In the case of gcc, python etc. you can call them with a colon-bang (:!).
[131078750020] |E.g. :!python something.py
[131078760010] |In vim, the current file can be referred to as %, so
[131078760020] |See :he cmdline-special (or here)
[131078770010] |Try the quickrun plugin for Vim.
[131078770020] |The page includes a link to GitHub, which is the latest version and which is what I use.
[131078770030] |The advantage of using this plugin, as opposed to doing :!, is that the output from the command will be collected and shown in a split window.
[131078770040] |Also, by default the plugin will hang your vim instance when you ask it to execute a command, but it can be configured to run the command asynchronously, which is what I do.
[131078770050] |Read the documentation for more details.
[131078780010] |I use a vim that has the Python interpreter compiled in.
[131078780020] |I source a Python file that has this function:
[131078780030] |And map it to a keyboard shortcut:
[131078780040] |I had previously set some environment variables to determine the exact terminal to run in if using gvim, or to use the same terminal if not in X.
[131078780050] |Then I usually just type ';ri' in a Python buffer to run it (usually to test it).
[131078790010] |Extracting tokens from a line of text
[131078790020] |Using bash scripting and grep/awk/sed, how can I split a line matching a known pattern with a single-character delimiter into an array, e.g. convert token1;token2;token3;token4 into a[0]=token1 ... a[3]=token4?
[131078810010] |UPDATE Please note that making an array this way is suitable only when IFS is a single non-whitespace character and there are no multiple consecutive delimiters in the data string.
[131078810020] |For a way around this issue, and a similar solution, go to this Unix & Linux question... (and it is worth the read just to get more of an insight into IFS).
[131078810030] |Use bash (and other POSIX shells, e.g. ash, ksh, zsh)'s IFS (Internal Field Separator).
[131078810040] |Using IFS avoids an external call, and it simply allows for embedded spaces.
[131078820010] |There are two major approaches.
[131078820020] |One is IFS, demonstrated by fred.bear.
[131078820030] |This has the advantage of not requiring a separate process, but it can be tricky to get right when your input might have characters that have special meaning to the shell.
[131078820040] |The other approach is to use a text processing utility.
[131078820050] |Field splitting is built into awk.
[131078820060] |Awk is particularly appropriate when processing multiple inputs.
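A minimal sketch of the IFS/read approach from the first answer (variable names are illustrative):

    line='token1;token2;token3;token4'
    IFS=';' read -r -a a <<< "$line"    # split on ; into the array a
    printf '%s\n' "${a[0]}" "${a[3]}"   # token1, token4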
[131078830010] |rsync via ssh from Linux to Windows SBS 2003: protocol mismatch
[131078830020] |I am trying to back up my Linux webserver to our local Windows SBS 2003 server in the office.
[131078830030] |I have set up ssh and cwRsync on the Windows server and have confirmed that the Linux server can reach the Windows server via the command:
[131078830040] |It asks for a password and connects fine.
[131078830050] |However, when I run this command to start the backup:
[131078830060] |I get this error after entering the password:
[131078830070] |protocol version mismatch -- is your shell clean?
[131078830080] |and then it dies.
[131078830090] |Has anyone got any ideas?
[131078840010] |In /etc/ssh/ssh_config, or in your own config in your home folder (~/.ssh/config), try to change the protocol version from 2 to 1 or whatever protocol is running on your Windows machine.
[131078840020] |For more details see man ssh and man ssh_config.
[131078850010] |This error message is explained in rsync's FAQ.
[131078850020] |It means that some program is writing something when the ssh connection is established (and that breaks rsync).
[131078860010] |su and aliases confusion
[131078860020] |I create an alias as my current user in the bash shell, which I can see using the alias command.
[131078860030] |When I switch user without the -, i.e. su testuser, the alias is not carried into the new user's environment.
[131078860040] |Any idea why?
[131078870010] |This is because su creates a new shell, starting fresh.
[131078870020] |So if you want your alias to persist, you need to create it in your .bashrc.
[131078880010] |Thing is, when you create any alias in the terminal, it is temporary.
[131078880020] |If you open another terminal while you are logged in as the same user, you won't be able to access that alias.
[131078880030] |So, you need to store them permanently, as said by asoundmove, in the .bashrc file.
[131078880040] |You can store it in any other file as well, but then that file has to be included in your .bashrc.
[131078890010] |From man su:
[131078890020] |As asoundmove and Barun explained, su starts a new shell, so without the - it is like running bash --norc.
[131078900010] |I'm an existing Pro Tem Moderator, and currently one of the top 5 users on the site by reputation.
[131078900020] |My primary goal as pro tem has been to define what is and is not on topic.
[131078900030] |One such endless slew of questions in the beginning was cross-platform applications, which I've argued are on topic.
[131078900040] |Now that we've launched, I'm trying to be less of a shaper and see in general what the community thinks.
[131078900050] |Of the existing Pro Tem moderators I might be the most likely to have an opinion on whether something should be on topic, and the least likely to reword your post.
[131078900060] |Remember to vote, remember to vote to close, and flag for moderator attention to migrate.
[131078910010] |Write to a file without redirection?
[131078910020] |Hello!
[131078910030] |I am writing a regular compiled application that needs to create a special file and write a magic cookie into it.
[131078910040] |I can't write the file directly from the application; the system security model requires me to launch a helper tool with elevated privileges to do the trick.
[131078910050] |I can supply any number of arguments to the helper tool.
[131078910060] |Now I would like to pick some very simple system command that would serve as the helper tool and create the file for me.
[131078910070] |Something like this:
[131078910080] |/bin/sh -c "/bin/echo -n 'magic' >/some/where/file"
[131078910090] |A simple touch does not cut it, as I need to write the cookie into the file, and a simple echo without the shell wrapper does not work, as it needs redirection to write the file.
[131078910100] |I don't feel comfortable calling the shell with root privileges for such a trivial task.
[131078910110] |Is there some really simple, constrained system command that I could call to write the file for me?
[131078920010] |How about this:
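(The command itself was lost from this dump; a sketch of the idea, using the path from the question and sudo for concreteness - the redirection happens in the unprivileged shell, and only tee writes as root:)

    printf 'magic' | sudo tee /some/where/file > /dev/null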
[131078920020] |Sure, there are redirections in this, but only tee runs as root, not a shell.
[131078920030] |It works with dd of=... too.
[131078930010] |There's another consideration, which is that you don't want to put the value of the magic cookie on a command line, since that can be observed by other users.
[131078930020] |Even if the program is short-lived (including if the program zeros out the command line string), there is an opportunity for attack.
[131078930030] |So, a theoretical:
[131078930040] |is a dangerous approach.
[131078930050] |Therefore, I endorse @stribika's suggestion: write the value to a temporary file and copy it into place.
[131078930060] |Make sure to use a secure function to create the temporary file (mkstemp()) so that there's no race condition there either.
[131078940010] |Customized XDM-based login screen
[131078940020] |I am in the process of creating a lightweight yet full-featured Linux distro and don't want to include a whole bunch of unnecessary or bloated software.
[131078940030] |It is supposed to run off a USB flash drive.
[131078940040] |I want to have it be somewhat user-friendly, hence the necessity of a graphical login.
[131078940050] |I really hope it can be xdm (a 115 KB package) and not kdm (part of a 64 MB package).
[131078940060] |I realize that much of the user-friendliness comes from having a visually-pleasing graphical interface, and a lot of effort thus far has been put into art direction.
[131078940070] |Much of the user interface (fluxbox) has been heavily customized to reflect the art and themes of the distro.
[131078940080] |My questions are:
[131078940090] |
  • How far can the look and feel of xdm(1) be customized?
  • [131078940100] |
  • Have you tried or seen the work of others who have done it?
  • [131078940110] |
  • Can you show me an example?
  • [131078940120] |
  • If it's infeasible to expect that much out of xdm, is there another graphical login client that is equally lightweight?
[131078950010] |You can definitely customize xdm.
[131078950020] |This is documented in the (quite thorough) xdm man page.
[131078950030] |Some things are customized via X resources (which, long ago, was the normal way to configure X apps - you can use a similar thing to customize xterm), generally in the file /etc/X11/xdm/Xresources.
[131078950040] |Other things - like setting a background image - are controlled through the Xsetup script.
[131078950050] |But you might also want to look at LXDM, which is another lightweight and modern (GTK) X display manager, without all the dependencies of gdm or kdm.
[131078950060] |In fact, you might want to look at LXDE for your project in general - it's designed to be a very lightweight desktop environment (built around Openbox as a window manager).
[131078950070] |There's also a display manager called SLiM, which also aims at being lightweight and themeable.
[131078950080] |I haven't used that one, though, so I can't vouch for it.
[131078960010] |Multiple Users on a Desktop Environment
[131078960020] |This is probably a REALLY stupid question.
[131078960030] |But anyway, let's pretend we had a rather powerful *nix system...
[131078960040] |Now obviously I know you can set up multiple users to log in to a system... but how exactly do you do that?
[131078960050] |Like... how would all the monitors connect and such, or would you need a smaller computer node that, like, reroutes it or something?
[131078960060] |I know that probably sounds dumb... but how do system admins and such set up multiple users for a *nix system across a large building or something?
[131078970010] |Generally, one runs a server with no actual graphical display attached to it (maybe a very simple one for diagnostic work).
[131078970020] |Clients connect via a network protocol, either X tunneled over SSH or a remote-desktop protocol like VNC or RDP.
[131078970030] |With the former, users execute GUI programs from the remote shell and they show up seamlessly as windows on their client systems.
[131078970040] |This works well on high-speed networks as long as the graphics aren't intensive, but unfortunately the X protocol is very chatty and not highly efficient.
[131078970050] |It also requires each client to run an X server, which is automatic on Linux clients, easy on Mac OS, and somewhat cumbersome on Windows.
[131078970060] |The other approach is to use VNC or RDP, which run an entire remote desktop session displayed as a window on the client.
[131078970070] |The actual work is done on the server and a compressed graphics stream is delivered to the client program.
[131078970080] |There's also an in-between option called NX, which uses an optimized version of the X protocol to deliver a similar experience (with some performance improvements over VNC or RDP).
[131078970090] |For these approaches, client programs are available for any major (and many minor) operating system.
[131078970100] |There is another entire way to go, though, which matches more what you are imagining: a ginormous octopus-like system extending direct graphical connections from a central server around a small area (or even a whole building).
[131078970110] |This is known as "Multiseat X", and you can read more about doing that in this article from x.org.
[131078970120] |The links from there indicate that there's enough interest in doing this to keep the idea alive, although I've never actually seen anyone doing it in my direct experience.
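To make the first (remote X over SSH) approach described above concrete, a minimal sketch (hostname and program are placeholders):

    ssh -X user@bigserver      # log in with X11 forwarding enabled
    xclock &                   # runs on the server, displays on the local screen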
[131078980010] |If you have one central server and many client machines, SSH with X11 forwarding is a very good method of accomplishing this.
[131078980020] |If you're just talking about having one machine with many monitors, keyboards, and mice, this is called "multiseat".
[131078980030] |I believe that with recent X.org versions this is no longer possible, but I believe they're trying to bring it back.
[131078980040] |Here are a couple of links for you.
[131078980050] |And now that you know it's called multiseat you can Google around for more information. http://en.wikipedia.org/wiki/Multiseat_configuration#GNU.2FLinux http://wiki.x.org/wiki/Development/Documentation/Multiseat
[131078990010] |Method no. 1.
[131078990020] |It is possible to set up diskless stations - nothing expensive - each simply has to run only an X server, preferably with 2D acceleration (3D nowadays).
[131078990030] |On startup it gets an image from the server and starts an X login screen that presents a login on the server.
[131078990040] |The applications run on the server but they are displayed on the thin client.
[131078990050] |To mess things up, this means that the X clients run on the server while the X server runs on the client.
[131078990060] |The exact details vary from one diskless setup to another, but there are some pre-packaged tools to do this.
[131078990070] |It can be built even using second-hand clients (they do nothing except display polygons) as long as the network and server can handle them.
[131078990080] |Method no. 2. X can handle multiple cards and multiple inputs (multiseat).
[131078990090] |It can also be restricted to only a selected screen and/or input.
[131078990100] |You may start one X server configured to use only mouse1, keyboard1 and monitor1, then another that uses mouse2, keyboard2 and monitor2, etc.
[131078990110] |However, as some cards do not handle this, there is Xephyr, which does the same but within one X server.
[131079000010] |Another answer is LDAP.
[131079000020] |You can configure a domain as centralized storage for all users' profiles.
[131079000030] |Here is how it is done in Debian.
[131079010010] |What are some common tools for intrusion detection?
[131079010020] |Please give a brief description of each tool.
[131079020010] |Snort
[131079020020] |From their about page:
[131079020030] |Originally released in 1998 by Sourcefire founder and CTO Martin Roesch, Snort is a free, open source network intrusion detection and prevention system capable of performing real-time traffic analysis and packet logging on IP networks.
[131079020040] |Initially called a "lightweight" intrusion detection technology, Snort has evolved into a mature, feature-rich IPS technology that has become the de facto standard in intrusion detection and prevention.
[131079020050] |With nearly 4 million downloads and approximately 300,000 registered users, Snort is the most widely deployed intrusion prevention technology in the world.
[131079030010] |Tripwire
[131079030020] |This is an open source (though there's a closed-source version) integrity checker that uses hashes to detect file modifications left behind by intruders.
[131079040010] |Why don't you check http://sectools.org/
[131079050010] |DenyHosts, for the SSH server.
[131079060010] |OpenBSD has mtree(8): http://www.openbsd.org/cgi-bin/man.cgi?query=mtree It checks whether any files have changed in a given directory hierarchy.
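A hedged sketch of using mtree(8) that way (flags per the OpenBSD manual; keyword support and paths vary by version, so verify against your system):

    mtree -c -K sha256digest -p /etc > /var/db/etc.spec   # record the hierarchy
    mtree -p /etc < /var/db/etc.spec                      # later: report changes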
[131079070010] |Logcheck is a simple utility designed to allow a system administrator to view the log files produced on the hosts under their control.
[131079070020] |It does this by mailing summaries of the log files to them, after first filtering out "normal" entries.
[131079070030] |Normal entries are entries which match one of the many included regular expression files contained in the database.
[131079070040] |You should watch your logs as one part of a healthy security routine.
[131079070050] |It'll also help trap a lot of other (hardware, auth, load...) anomalies.
[131079080010] |Keymapping problem when working with emacs and openbox
[131079080020] |I have an Apple keyboard and I had to do some remapping of the keys to make the Mod4 key the first key to the left of the space bar for working with Emacs.
[131079080030] |The script below worked fine when I was using the dwm window manager, but after switching to Openbox I have found that instead of swapping keycodes between the Option and Command keys, both of the keys are doing the same thing.
[131079080040] |One odd thing I noticed was that on the new setup, when I run showkey and press the Option and Command keys I get 56 and 125 respectively, but these keys don't work at all when inserted into the script below instead of the 64 and 133.
[131079080050] |I must admit I created the script below by continually tweaking it until it worked, so there could be a much better way of doing it.
[131079080060] |

    .xmodmap
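A rough sketch of such an .xmodmap - assuming the goal is swapping keycode 64 (Alt) with keycode 133 (Super), the two values mentioned above; the keysym names here are assumptions, so verify yours with xev first:

    ! hypothetical sketch: swap keycode 64 (Alt_L) and keycode 133 (Super_L)
    clear mod1
    clear mod4
    keycode 64 = Super_L
    keycode 133 = Alt_L
    add mod1 = Alt_L
    add mod4 = Super_L

It would be loaded with xmodmap ~/.xmodmap, e.g. from an autostart script.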

[131079090010] |How to check how long a program has been running? [131079090020] |Is there a way to check how long a program has been running, short of running it from a monitoring app? [131079100010] |ps -p $$ -o etime= [131079100020] |Where $$ is the PID of the process you want to check. [131079100030] |This will return the elapsed time in the format [[dd-]hh:]mm:ss. [131079100040] |The ps program gets this from /proc/$$/stat, where one of the fields (see man proc) is the process start time. [131079100050] |This is, unfortunately, specified to be the time in jiffies (an arbitrary time counter used in the Linux kernel) since the system boot. [131079100060] |So you have to determine the time at which the system booted (from /proc/stat), the number of jiffies per second on this system (which turns out to be hard to determine, although some safe guesses can be made as long as you're not worried about keeping it working in the future), and then do the math to get the elapsed time in a useful format. [131079100070] |So it's convenient that ps does all that for you. :) [131079110010] |ps takes a -o option to specify the output format, and one of the available columns is etime. [131079110020] |According to the man page: [131079110030] |etime - elapsed time since the process was started, in the form [[dd-]hh:]mm:ss. [131079110040] |Thus you can run this to get the PID and elapsed time of every process: [131079110050] |If you want the elapsed time of a particular PID (e.g. 12345), you can do something like: [131079110060] |(Edit: Turns out there's a shorter syntax for the above command; see mattdm's answer) [131079120010] |Portable: [131079120020] |i.e. that shell was started on January 30 and totaled about 6 seconds of CPU time. [131079120030] |There may be more precise or more parseable but less portable ways to get this information. [131079120040] |Check the documentation of your ps command or your proc filesystem. [131079120050] |Under Linux, this information lives in /proc/$pid/stat. [131079120060] |The CPU time is in jiffies; I don't know offhand how to find the jiffy value from the shell. [131079120070] |The start time is relative to the boot time (found in /proc/uptime). [131079130010] |If you can run time and then execute a command, you will get exactly what you are looking for. [131079130020] |You cannot do this against an already-running command. [131079130030] |[0] % time sleep 20 [131079130040] |sleep 20 0.00s user 0.00s system 0% cpu 20.014 total
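Putting the ps answers above into concrete commands (12345 is a placeholder PID):

    # elapsed time of one process, in [[dd-]hh:]mm:ss
    ps -p 12345 -o etime=
    # PID and elapsed time of every process
    ps -eo pid,etime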
[131079140010] |Binary compatibility between Mac OS X and Linux [131079140020] |Brace yourselves, this question will likely appear naive and/or foolish, seeing as I am relatively new to the inner workings of unix like systems, and programming in general. [131079140030] |Ready? [131079140040] |Ok! [131079140050] |I will go through about 3 levels of ludicrosity, increasing as I go along. [131079140060] |We have two systems with similar hardware (the main point being the processor, let us say a standard intel core 2 duo). [131079140070] |One is running (insert your linux distro here: Ubuntu will be used henceforth), and the other is running, let's say, Mac OS X. [131079140080] |One compiles an equivalent program, let us say something like: [131079140090] |The code is extremely simple, because I don't want to consider the implications of shared libraries yet. [131079140100] |When compiled on the respective systems, [131079140110] |is not the main difference between the output a matter of ELF vs Mach-O? [131079140120] |If one were to strip each binary of the formatting, leaving a flat binary, wouldn't the disassembled machine instructions be the same? (with perhaps a few differences depending on the compilers' habits/tendencies). [131079140130] |1.) [131079140140] |If one were to develop a program to repackage the flat binary produced from our Ubuntu system in the Mach-O formatting, would it run on the Mac OS X system? [131079140150] |Then, if one only had the compiled binary of the supposed program above, and one had this mystical tool for repackaging flat binaries, would simple programs be able to run on the Mac OS X system? [131079140160] |Now let us take it a bit further. [131079140170] |We now have a program with source such as: [131079140180] |2.) Assuming this program is compiled and statically linked, would our magical program still be able to repackage the raw binary in the Mach-O format and have it work on Mac OS X? [131079140190] |Seeing as it would not need to rely on any other binaries (which the Mac system would not have in this case). [131079140200] |And now for the final level: [131079140210] |3.) What if we used this supposed program to convert all of the necessary shared libraries to the Mach-O format, and then instead compiled the program above with dynamic linking? [131079140220] |Would the program still manage to run? [131079140230] |That should be it for now; obviously each step of absurdity relies on the previous one to even make sense, so if the very first pillar gets destroyed, I doubt there would be much merit to the remaining tiers. [131079140240] |I definitely would not even go as far as to think of this with GUI programs in mind. [131079140250] |Windowing systems would likely be a whole other headache. [131079140260] |I am only considering command line programs at this stage. [131079140270] |Now, I invite the world to correct me, and tell me everything that is wrong with my absurd line of thinking. [131079150010] |You forget one crucial thing, namely that your program will have to interact with the operating system to do anything interesting. [131079150020] |The conventions are different between Linux and OS X, so the same binary cannot run as-is without essentially having a chunk of operating system dependent code to be able to interact with it. [131079150030] |Many of these things are hidden away in libraries, which you then need to link in, and that means your program needs to be linkable, and linking is also different between the two systems. [131079150040] |And so it goes on and on. [131079150050] |What on the surface sounds like doing the same thing is very different in the actual details. [131079160010] |This is doable if someone wants to spend enough time to make it happen. [131079160020] |No one has, yet. [131079160030] |It's been done before on other platforms: [131079160040] |
  • Solaris and UnixWare include a helper program called lxrun, which works something like sudo: you pass your executable name and parameters to a helper program and it fixes things up dynamically so that the executable can talk to the OS. [131079160050] |The official site says it's bitrotted.
  • [131079160060] |Linux's kernel once had a feature called iBCS that did the reverse, except that it didn't need a helper because the kernel recognized the "foreign" binaries directly. [131079160070] |It died with kernel 2.2, most likely because the small Unix server battle was essentially over once 2.4 came out.
  • [131079160080] |FreeBSD's kernel can be configured to recognize Linux binaries and run them as if they were native. [131079160090] |FreeBSD being less popular than Linux, this feature appears to be in better shape than the above two.
[131079160100] |OS X has a lot of FreeBSD in it, so porting its Linux support might be possible. [131079170010] |On Stack Exchange, we believe the core moderators should come from the community, and be elected by the community itself through popular vote. [131079170020] |We hold regular elections to determine who these community moderators will be. [131079170030] |Community moderators are accorded the highest level of privilege on our community, and should themselves be exemplars of positive behavior and leaders within the community. [131079170040] |Our general criteria for moderators are as follows: [131079170050] |
  • patient and fair
  • [131079170060] |leads by example
  • [131079170070] |shows respect for their fellow community members in their actions and words
  • [131079170080] |open to some light but firm moderation to keep the community on track and resolve (hopefully) uncommon disputes and exceptions
[131079170090] |Every election has three phases: [131079170100] |
  • Nomination
  • [131079170110] |Primary
  • [131079170120] |Election
[131079170130] |You can read more about the election process on the blog, and ask any questions on meta. [131079170140] |If you're looking for a ton of detail and statistics on all the nominees, see the election page put together by community member Yi Jiang. [131079170150] |If you're looking for the candidates' opinions, a town hall chat was held partway through the election; see the digest put together by community member Josh. [131079170160] |Please participate in the moderator elections by voting, and perhaps even by nominating yourself to be a community moderator! [131079180010] |Odd problem regarding 'apt-get update' [131079180020] |I was having some serious network troubles, and now running sudo apt-get update gives this error: [131079180030] |It's an odd one because I've never seen it before. [131079180040] |Have you experienced it? [131079180050] |How can I fix it? [131079190010] |The file was corrupted somewhere. [131079190020] |I thought running apt-get update would fix it. [131079190030] |If it doesn't, remove the file (sudo rm /var/lib/apt/lists/ftp.is.co.za_debian_dists_testing_main_binary-i386_Packages) and try apt-get update. [131079190040] |If that still downloads a broken file, there may be an invalid entry in a cache somewhere between you and the server. [131079190050] |Try using a different Debian mirror for a couple of days. [131079200010] |Steps to fix: [131079200020] |
  • Disable Testing suite
  • [131079200030] |sudo apt-get update
  • [131079200040] |Re-enable Testing suite
  • [131079200050] |sudo apt-get update
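Spelled out as commands, the sequence might look like this - a sketch that assumes your testing repository is a single deb line in /etc/apt/sources.list:

    # 1. disable the testing suite by commenting its line out
    sudo sed -i 's/^deb \(.*testing.*\)/# deb \1/' /etc/apt/sources.list
    # 2. rebuild the package lists without it
    sudo apt-get update
    # 3. re-enable the testing suite
    sudo sed -i 's/^# deb \(.*testing.*\)/deb \1/' /etc/apt/sources.list
    # 4. update again to fetch a fresh copy of the testing lists
    sudo apt-get update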
[131079210010] |Best way to set up a separate Linux environment in ~ [131079210020] |I do most of my work (involves a lot of C/Python) on a development server that is shared with several other people. [131079210030] |As a result we all seem to do a bit of the system administration chores (there is no sysadmin). [131079210040] |This tends to work alright, but installing and maintaining packages and libraries tends to be messy. [131079210050] |Lately I've found myself installing and building more and more packages etc. in my home directory. [131079210060] |What is the best way to formalize/streamline this process? [131079210070] |Right now I am merely ./configuring with --prefix, setting my path so that my ~/usr/bin comes before /usr/bin, etc., and trying to set LD_LIBRARY_PATH, C_INCLUDE_PATH and PYTHONPATH properly, but this is becoming error-prone and painful. [131079210080] |Is there a more "automated" method? [131079220010] |Are you able to use a package manager, such as pacman (arch linux), emerge (gentoo), apt-get (Debian-based - such as Ubuntu), yum (RHEL)? [131079220020] |If these are custom installs that require a specific version, then you should be installing them system-wide in /usr/bin and running them as a specific (non-privileged) user. [131079230010] |For simple package management, you can use stow. [131079230020] |Install each package in a separate directory (e.g. ~/packages/stow) and stow automatically maintains a combined hierarchy of symbolic links (e.g. ~/packages/bin/pydoc -> ~/packages/stow/python/bin/pydoc). [131079230030] |Also consider xstow, a more powerful program around the same basic principle.
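As a sketch of that stow workflow (the python directory name is just an example):

    # build the package with its own prefix inside the stow directory
    ./configure --prefix="$HOME/packages/stow/python"
    make && make install
    # then let stow maintain the merged tree of symlinks one level up
    cd ~/packages/stow
    stow python      # creates e.g. ~/packages/bin/pydoc -> ../stow/python/bin/pydoc
    stow -D python   # removes those links again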
[131079240010] |Grub does not list windows after crunchbang installation [131079240020] |I have a couple of years of experience with ubuntu, but this is my first time with a non-ubuntu based distribution. [131079240030] |I managed to successfully install Crunchbang from a bootable usb stick, created with unetbootin (my home laptop's cd drive is dead), and at the end of installation it asked me whether I wanted to install grub. [131079240040] |It said that it detected the windows operating system that was already there and that it should be fine if all my operating systems are listed. [131079240050] |So, I let it install grub and now when I boot into my machine only a couple of Crunchbang listings appear in the grub boot menu. [131079240060] |My Windows has become inaccessible. [131079240070] |With the little experience I had, I tried to look for the menu.lst file, which I expected to list the entries that would be shown in the grub boot menu. [131079240080] |But I couldn't find that file. [131079240090] |Perhaps crunchbang puts it in a different location? [131079240100] |I want to get my Windows listed in the grub boot menu. [131079240110] |Any ideas? [131079240120] |Edit: From a comment on the question Editing grub menu I came to know the location of the boot menus, as /boot/grub/grub.cfg, and that file exists on my crunchbang system. [131079240130] |I now need to know how and what to add to it to get my Windows entry. [131079240140] |Edit 2: From the same question as above, I learned to do sudo update-grub, which did put my Windows entry in my grub boot listing and everything is fine. [131079240150] |But I would still like to know... why wasn't it there initially, when crunchbang did detect it and said that it would be added? [131079240160] |Thanks. [131079250010] |Your question seems to be solved already, I just want to add some information: [131079250020] |
  • You could not find the file menu.lst because it belongs to Grub Legacy. [131079250030] |A lot of distros are switching to Grub 2 as the default boot loader, including Crunchbang.
  • [131079250040] |grub.cfg is there, but it isn't meant to be edited. [131079250050] |The proper way to do it is to edit the files in /etc/grub.d/ then run grub-mkconfig.
  • [131079250060] |Some distros (I think Debian based) have a script named update-grub that can probe your system, populate /etc/grub.d/ and run grub-mkconfig in one shot.
  • [131079250070] |I agree with xenoterracide about the possibility of a bug in the installer. [131079250080] |If you can reproduce it then consider submitting a bug report so the Crunchbang developers can fix it.
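Concretely, the regeneration step from the second and third points looks like this (the output path is the usual default):

    # regenerate grub.cfg from the scripts in /etc/grub.d/
    sudo grub-mkconfig -o /boot/grub/grub.cfg
    # on Debian-derived distros this wrapper does the same thing
    sudo update-grub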
[131079260010] |As others have commented, the solution is to run sudo update-grub from Crunchbang and the next time you boot, Windows should be present in the grub boot list. [131079270010] |setting up mail system [131079270020] |So I'm still kind of new to linux, but what are the steps towards setting a linux box up in such a way that it can send mail using the shell? [131079270030] |I mean, I've done the necessary sudo apt-get install mailsystem (or something like that) which sets up the mail command. [131079270040] |However, will you also have to set up a .com to point to your linux box as its SMTP server? [131079270050] |What else needs to be done? [131079280010] |I'm assuming that you are using a Debian based derivative given that you mentioned apt-get in your question. [131079280020] |This can be done fairly simply using the exim4 mail package. [131079280030] |A simple apt-get install of the exim4 package [131079280040] |will install everything that you need to send mail via SMTP. [131079280050] |Note you need to be root or use sudo for the apt-get command to work. [131079280060] |During the install the exim4-config package setup will ask you a number of questions that will let you configure things appropriately. [131079280070] |You should probably select the "mail sent by smarthost; no local mail" option and give it details of your outgoing mail provider. [131079280080] |It is also possible to send mail directly as an internet site; mail is sent and received directly using SMTP, but that can have issues with your internet provider, so you are best to start off using a smarthost. [131079280090] |If you need to tweak your configuration you can re-run the configuration step with dpkg-reconfigure exim4-config. [131079280100] |I'm guessing that your outgoing mail provider will require your machine to authenticate before it allows sending of mail. [131079280110] |In that case you need to add an entry into /etc/exim4/passwd.client. [131079280120] |The format is quite simple: each entry takes the form servername:login:password, and it is documented in the package documentation. [131079280130] |The latter also tells you how to configure other settings files. [131079280140] |It will likely be of interest to set up /etc/email-addresses to ensure that when you send email as a specific user it gets sent via your ISP as your real outgoing address. [131079290010] |I don't know what debian uses as /usr/bin/mail, but try to get a mail client that can show you the SMTP dialog that it goes through. [131079290020] |I had "nail" (apparently now "mailx" - http://heirloom.sourceforge.net/mailx.html) on one of my linux boxes: [131079290030] |The "-v" flag to nail (mailx) causes it to show you the SMTP dialog. [131079290040] |That output can be invaluable when troubleshooting. [131079300010] |Splitting stdin to different outputs in bash [131079300020] |Possible Duplicate: Is there a way in bash to redirect output and still have it go to stdout? [131079300030] |How would I send /dev/stdin to both /dev/stdout and a $logfile in one command? [131079300040] |One solution would be just to do it once for both: [131079300050] |But that seems bad. [131079300060] |Is there a better way? [131079310010] |The tee command does exactly this:
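A minimal sketch (some_command stands in for whatever produces the output):

    # copy stdin both to stdout and to $logfile
    some_command | tee "$logfile"
    # add -a to append to the log instead of truncating it
    some_command | tee -a "$logfile"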
[131079320010] |Debugging Linux machine freezes [131079320020] |I have 15 identical Linux RH 4.7 64-bit servers. [131079320030] |They run a cluster database (the clustering is at the application level). On occasion (every month or so) a random box (never the same one though) freezes. [131079320040] |I can ping the box and ping works. [131079320050] |If I try to ssh into the box I get: [131079320060] |SSH is set up properly. [131079320070] |When I go to the server room and try to log in directly on the console, I can switch consoles with Alt+Fn, I can enter a username, and characters do show, but after pressing Enter, nothing happens. [131079320080] |I waited 8 hours once and it didn't change. [131079320090] |I set up syslog to log everything to a remote host, and there is nothing in those logs. [131079320100] |When I reboot the machine, it works without a problem. [131079320110] |I have run HW tests - everything is ok, and nothing is in the logs. [131079320120] |The machines are also monitored with NAGIOS, and there is no unusual load or activity prior to a freeze. [131079320130] |I have run out of ideas; what else can I do or check? [131079330010] |The only time I've seen anything similar was where a KVM switch was used and a keyboard hot-key (e.g. alt+n) was used to switch between servers. [131079330020] |It didn't happen every time and it was the server being switched away from that was affected - so it wasn't immediately noticeable. [131079330030] |No lock-ups would occur if a physical button on the KVM switch itself was used to switch between servers. [131079330040] |If the hot-key was often used, occasionally a server would not allow new logins. [131079330050] |Existing SSH sessions were unaffected. [131079340010] |It sounds like your kernel panicked in some way such that sshd couldn't send the server keys. [131079340020] |Possibly, the kernel was wedged in such a way that the network stack was still up, but the vfs layer was unavailable. [131079340030] |When I experienced similar problems on a RHEL4 system, I set up the netdump and netconsole services, and a dedicated netdump and syslog server to catch the crash dumps and kernel panic information. [131079340040] |I also set the kernel.panic sysctl to 10. [131079340050] |That way, when a system panics, you get both the kernel trace and a copy of the memory on that system, which you can then analyse with the 'crash' utility. [131079340060] |You would certainly also benefit from setting up a serial console for the hosts, so you could see the console output and potentially hit the magic sysrq keys. [131079340070] |Also, if you're willing to set up the networking and you have hardware that supports it, you can use IPMI to remotely poweroff, poweron, restart, and query the hardware. [131079340080] |(For what it's worth, RHEL5 has similar functionality with kexec/kdump, only the crash dump is stored locally.) [131079350010] |I will bet dollars to donuts that you are running out of memory. [131079350020] |The system is grinding to a halt as it tries to figure out where to get some from. [131079350030] |It may be happening so quickly that your monitoring doesn't catch it. [131079350040] |I'd step up monitoring, including remote logging of memory usage. [131079350050] |Check in the logs for OOM messages as well. [131079350060] |(You may even just want to have some ssh windows open running top.) [131079360010] |To me this sounds like the system is out of resources, so the process needed by the server side of ssh cannot be created. [131079360020] |The actual bottleneck can vary - out of processes or out of memory - and the only way to be sure is to look at the logs and console to see if anything is present there. [131079360030] |You may want to set up a scenario of pre-started ssh jobs - one to each machine - simply to be prepared next time it happens.
[131079360040] |If it is really bad, then you may want to consider starting another shell with more built-in commands so you can investigate more without having to start an extra process, as that may not be possible. [131079360050] |Also "tail -f /var/log/*" may be very useful. [131079360060] |Good luck. [131079370010] |Can IFS (Internal Field Separator) function as a single separator for multiple consecutive delimiter chars? [131079370020] |Parsing an array using IFS with non-whitespace values creates empty elements. [131079370030] |Even using tr -s to shrink multiple delimiters to a single delimiter isn't enough. [131079370040] |An example may explain the issue more clearly... [131079370050] |Is there a way to achieve "normal" results via a tweaking of IFS (is there an associated setting to change IFS's behaviour?), i.e. [131079370060] |to act the same as the default whitespace IFS? [131079370070] |Here is the output [131079380010] |From the bash manpage: [131079380020] |Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. [131079380030] |A sequence of IFS whitespace characters is also treated as a delimiter. [131079380040] |It means that IFS whitespace (space, tab and newline) is not treated like the other separators. [131079380050] |If you want to get exactly the same behaviour with an alternative separator, you can do some separator swapping with the help of tr or sed: [131079380060] |The %#%#%#%#% thing is a magic value to replace the possible spaces inside the fields; it is expected to be "unique" (or at least very unlikely). [131079380070] |If you are sure that no space will ever be in the fields, just drop this part. [131079390010] |You can do it with gawk too, but it's not pretty: [131079390020] |outputs [131079400010] |As bash IFS does not provide an in-house way to treat consecutive delimiter chars as a single delimiter (for non-whitespace delimiters), I have put together an all-bash version (vs. using an external call, e.g. tr, awk, sed). [131079400020] |It can handle multi-char IFS... [131079400030] |Here are its execution-time results, along with similar tests for the tr and awk options shown on this Q/A page... [131079400040] |The tests are based on 10000 iterations of just building the array (with no I/O)... [131079400050] |Here is the output [131079400060] |Here is the script [131079410010] |Debian - su vs su - [131079410020] |I know what the difference between su and su - should be, but in my system (debian testing), for example, PATH is the same: [131079410030] |So I'm starting to think that the difference can be changed in configuration files. [131079420010] |The - parameter means starting with an environment which is almost the same as the login environment for that user. [131079420020] |Without -, the environment is the same as the original user's environment. [131079420030] |For example, PATH is usually the same for root and normal users. [131079420040] |On some systems there are no sbin folders for normal users. [131079420050] |You can't disable - from su easily. [131079420060] |Of course, you could go and edit the source code. [131079420070] |You can try this by running the commands below. [131079420080] |The first time, echo $FOO prints "bar"; the second time, it's empty.
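A minimal sketch of that experiment (someuser is a placeholder account):

    export FOO=bar
    su someuser -c 'echo $FOO'     # keeps your environment: prints "bar"
    su - someuser -c 'echo $FOO'   # builds a fresh login environment: prints an empty line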
[131079430010] |For configuring the su PATH, have a look at /etc/login.defs: [131079430020] |There are also a number of other places PATH can be changed, including: [131079430030] |
  • /etc/environment
  • [131079430040] |/etc/bash.bashrc
  • [131079430050] |/etc/profile
  • [131079430060] |/etc/profile.d/*
  • [131079430070] |~/.bashrc
  • [131079430080] |~/.bash_profile
[131079430090] |Without anything special in per-user settings, su seems to be getting its PATH from /etc/environment and su - seems to be getting its environment from /etc/login.defs ENV_SUPATH. [131079430100] |So on your system, my guess is that you have the same PATH value in /etc/login.defs as in /etc/environment, or you have some extra configuration in /etc/profile.d, /etc/bash.bashrc, or some rc file in /home/someuser. [131079440010] |How can I find out what my domain is for connecting with samba? [131079440020] |I am attempting to mount a password-protected network share from a NAS unit that works by appearing as a windows share. [131079440030] |On Windows, I just Map Network Drive, enter \\x.x.x.x\ShareName, then enter a password at the prompt. [131079440040] |On my Linux system, when I attempt to open smb://x.x.x.x/ServerName, I get a prompt for password and domain. [131079440050] |As far as I know, since my share is set up as a workgroup, I do not have a domain. [131079440060] |What should I enter here or do to mount this share? [131079450010] |Some answerer provided the right answer yesterday but deleted it, which I only saw in my inbox, so I can't tell who they are. [131079450020] |The correct answer was to use the hostname of the NAS unit as the domain. [131079460010] |sticking stickies to windows [131079460020] |I'm looking for a sticky notes/knotes/etc. type program with one difference: I can stick a note on a window, rather than the desktop, and it will remain attached to the window. [131079460030] |Basically, I want to be able to annotate windows. [131079460040] |I want to be able to say "this xterm is the one where I'm doing project X", "this application is running test Y", "this emacs is working on this bug", etc. [131079460050] |I'm not even sure how to search for that, but my google-fu is weak to begin with... [131079460060] |thanks. [131079470010] |Sun's Project Looking Glass had this feature (or something very close to it) but is now more or less dead, sadly. [131079470020] |One possible way to achieve a similar result would be to use a window manager that allows you to tab windows together (Fluxbox comes to mind), and tab a text editor or other notes app to each window, and use that for your notes. [131079480010] |A short tutorial on how a linux distro is organized and supposed to work [131079480020] |I need a tutorial, preferably with pics/diagrams, to get oriented to how a typical Linux distro is organized and used (including the basic software installation workflow). [131079480030] |It shouldn't be a 500-page book. [131079480040] |A concise 20-30 page tutorial is what I am looking for, so that I can read it over the weekend and just dive into Ubuntu/Fedora etc. and figure my way around on my own. [131079490010] |You could try the Ultimate Linux Newbie Guide videos or read the Linux.org beginner guide. [131079490020] |But to be honest, if you are going for something like Ubuntu you will find it very easy, and if you don't, there is a stack of info over on askubuntu.com, including this question which should have what you'll need. [131079500010] |Simple Shell Script with Arithmetic issue... ** is giving me trouble. [131079500020] |When I run this script I get this error: [131079500030] |./myscript.sh: 16: arithmetic expression: expecting primary: "1 ** 1" [131079500040] |When I run this shell script with bash, as in #! /bin/bash on the first line, the math works properly; unfortunately I need to use /bin/sh. [131079500050] |What am I doing wrong?
[131079500060] |I'm on Linux Mint if that matters. [131079510010] |Standard shell arithmetic only allows integer arithmetic operations. [131079510020] |This doesn't include ** for exponentiation, which bash has as an extension. [131079510030] |Integer exponentiation is easy enough to implement as a shell function (though you'll run into wraparound soon). [131079510040] |As an aside, why use expr here? [131079510050] |Shell arithmetic can do addition. [131079520010] |I think you're out of luck, as the ** exponent operator isn't standard for /bin/sh. [131079520020] |You can use bc, though: echo "$y ^ $x" | bc. [131079530010] |The POSIX shell apparently does not have an exponentiation operator. [131079530020] |You can roll your own:
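For instance, a loop-based sketch that assumes non-negative integer exponents:

    # pow BASE EXP - integer exponentiation in plain POSIX sh
    pow() {
        result=1 i=0
        while [ "$i" -lt "$2" ]; do
            result=$((result * $1))
            i=$((i + 1))
        done
        echo "$result"
    }
    pow 2 10    # prints 1024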
[131079540010] |Difference between /usr/include/sys and /usr/include/linux? [131079540020] |Well, obviously there is a difference, but I'm curious about the rationale behind why some things go under /usr/include/sys and others go under /usr/include/linux, and have the same header file name. [131079540030] |Does this have something to do with POSIX vs non-POSIX? [131079540040] |Also, I've managed to populate /usr/include/linux with headers on my Fedora system by grabbing the kernel-headers package; is there a standard package name for me to get header files that go under /usr/include/sys? [131079540050] |I haven't been able to find it. [131079550010] |The headers under /usr/include/linux and under /usr/include/asm* are distributed with the Linux kernel. [131079550020] |The other headers (/usr/include/sys/*.h, /usr/include/bits/*.h, and many more) are distributed with the C library (the GNU C library, also known as glibc, on all non-embedded Linux systems). [131079550030] |There's a little explanation in the glibc manual. [131079550040] |Note that /usr/include/linux and /usr/include/asm should contain the headers that were used when compiling the C library, not the headers from the running kernel. [131079550050] |Otherwise, if some constants or data structures changed, there will be an inconsistency between the compiled program and the C library, which is likely to result in a crash or worse. [131079550060] |(If the headers match the C library but the C library doesn't match the kernel, what actually happens is that the kernel is designed to keep a stable ABI and must detect that it's called under a different ABI and interpret syscall arguments accordingly. [131079550070] |The kernel must do this for statically compiled programs anyway.) [131079550080] |I remember a heated debate between Debian and Red Hat a while (a decade?) ago on the /usr/include/linux issue; apparently each side is sticking to its position. [131079550090] |(As far as I understand it, Debian is right, as explained above.) [131079550100] |Debian currently distributes /usr/include/linux and friends in the linux-libc-dev package, which is compiled from kernel sources but not upgraded with the kernel. [131079550110] |Kernel headers are in version-specific packages providing the linux-headers-2.6 metapackage; this is what you need to compile a module for a particular kernel version. [131079550120] |The package you're looking for is the C library headers. [131079550130] |I don't know what it's called, but you can find out with yum provides /usr/include/sys/types.h. [131079560010] |How does one find and replace text in all open files with geany? [131079560020] |How does one find and replace text in all open files with geany? [131079570010] |Menu Search->Replace (or Ctrl+h). [131079570020] |Fill in the find and replace boxes, expand Replace All, click In Session [131079570030] |

    Step-by-step:

    [131079570040] |Select "Replace" from Search menu.
    [131079570050] |Expand "Replace All".
    [131079570060] |Click "In Session".

[131079580010] |How does one find and replace text in all open files with gedit? [131079580020] |How does one find and replace text in all open files with gedit? [131079590010] |This is not possible with a stock gedit; there's an open ubuntu brainstorm idea for adding the ability. [131079590020] |However, there are plugins that add it, such as advanced-find. [131079590030] |If you install that, one of the sections on the "Advanced Find/Replace" dialog is "Scope"; choose "All Opened Documents". [131079600010] |Generally people who want to do this write an ed script and run it against all the files. [131079600020] |E.g.: [131079600030] |And then run it like this: [131079600040] |You can also use an ex script, which allows you to use all the : commands from vi. [131079600050] |It's the same binary as vi, just called using the command ex to start without the GUI. [131079610010] |How does one find and replace text in all open files with LibreOffice? [131079610020] |How does one find and replace text in all open files with LibreOffice? [131079620010] |The only way I can think of is with a script or macro. [131079620020] |I'd probably do it with python, since that's what I'm more familiar with, but there's some useful info on how to do it in OOBasic here. [131079630010] |How does one find and replace text in all open files with kate? [131079630020] |How does one find and replace text in all open files with kate? [131079640010] |The answer is simple. [131079640020] |As of Kate 3.4.3 (present in KDE 4.4.3) you cannot replace in multiple files at once, but just in the one you're currently viewing, by calling "Edit->Replace" or with the Ctrl+R shortcut. [131079650010] |How does one find and replace text in all open files with jed? [131079650020] |How does one find and replace text in all open files with jed? [131079660010] |Why is root's default shell configured differently from normal user accounts' default shell? [131079660020] |As far as I know, root's default shell is configured as csh and normal users' default shell is sh in FreeBSD. [131079660030] |And in Ubuntu, root is dash and normal users are bash. (refer: What's Ubuntu's default shell?) [131079660040] |Why are they configured differently? [131079670010] |According to the FAQ: [131079670020] |In FreeBSD's case, the reason is that csh is the only shell "guaranteed" to be on the base filesystem (stuff from ports usually winds up in /usr/local/bin, which defaults to a different filesystem). [131079670030] |This is important because you don't ever want there to be a situation where root can't log in because it's using a shell on a different (unmounted) filesystem.
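You can check what your own system uses by inspecting /etc/passwd, where the login shell is the last field of each entry:

    grep '^root:' /etc/passwd     # root's entry ends in its shell, e.g. /bin/csh on FreeBSD
    grep "^$USER:" /etc/passwd    # your own entry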
[131079680010] |I'm failing to restore a VirtualBox VM [131079680020] |I am trying to restore a VM but I get this error message: [131079680030] |I think this happened because, while the VM was live, I removed one snapshot. [131079680040] |How do I fix this, short of restoring older snapshots? [131079680050] |NOTE: This problem happens when I use version 4.0.4. [131079680060] |Version 3.2.10 allows me to delete a snapshot of a VM, even though it's live. [131079680070] |I guess it's a regression... watch me downgrade. [131079690010] |If I understand the situation correctly, then it's quite severe. [131079690020] |VirtualBox snapshots are incremental, so later ones depend on earlier ones. [131079690030] |When you delete one snapshot, VirtualBox does some processing to "merge" the snapshots; that's why it's not very fast to delete a snapshot. [131079690040] |I haven't tried it, but VirtualBox shouldn't let you delete snapshots while the virtual machine is running. [131079690050] |In case you somehow managed to do it, I think all hope is lost. [131079690060] |I hope you can restore the machine to an earlier snapshot. [131079700010] |How to enable flash on Chromium [131079700020] |How do you get Adobe's flash video to play on Chromium? [131079700030] |Is it this painful, or is there an extension somewhere? [131079700040] |NOTE: On Debian, I install flashplugin-nonfree for Mozilla-based browsers. [131079700050] |Is there an equivalent package for Chrome? [131079700060] |UPDATE: I needed to re-install flashplugin-nonfree for this to work. [131079710010] |Chromium can also use the Mozilla plugins. [131079710020] |Just install it and it should work. [131079710030] |What distro are you using? [131079720010] |How is the linux graphics stack organised? [131079720020] |Can anybody explain (hopefully with a picture) how the Linux graphics stack is organised? [131079720030] |I hear all the time about X/GTK/GNOME/KDE etc., but I really don't have any idea what they actually do and how they interact with each other and other portions of the stack. [131079720040] |How do Unity and Wayland fit in? [131079730010] |First of all, there is really no Linux graphics stack. [131079730020] |Linux has no graphical display capabilities. [131079730030] |However, Linux applications can use graphical displays and there are a number of different systems for doing that. [131079730040] |The most common ones are all built on top of X windows. [131079730050] |X is a network protocol because in the middle of an X protocol stack you can have a network as a standard component. [131079730060] |Let's look at a specific use case. [131079730070] |A physicist in Berlin wants to run an experiment at CERN in Switzerland on one of the nuclear particle colliders. [131079730080] |He logs in remotely, runs a data analysis program on one of CERN's supercomputer arrays, and graphs the results on his screen. [131079730090] |In Berlin, the physicist has an X-terminal device running some X-server software that is providing a graphical display capability to remote applications. [131079730100] |The X-server software has a framebuffer that talks to the specific device driver for the specific hardware. [131079730110] |And the X-server software speaks the X protocol. [131079730120] |So the layers might be graphical device->device driver->frame buffer->X server->X protocol. [131079730130] |Then, in Switzerland, the application connects to a display using the X protocol and sends graphic display commands like "draw rectangle" or "alpha blend". [131079730140] |The application probably uses a high level graphical library, and that library will likely, in turn, be based on a lower level library. [131079730150] |For instance the application may be written in Python using the WxWidget toolkit, which is built on top of GTK, which uses a library called Cairo for core graphical drawing commands. [131079730160] |There may also be OpenGL alongside Cairo. [131079730170] |The layers might be like this: WxWidgets->GTK->Cairo->X Toolkit->X protocol. [131079730180] |Clearly it is the protocol in the middle that connects things, and since Linux also supports UNIX sockets, a completely internal transport for data, these two types of things can run on one machine if you want to.
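The network transparency in that story comes down to pointing a client at a remote display; a sketch with made-up hostnames:

    # on the machine at CERN, aim an X client at the display in Berlin
    DISPLAY=berlin-xterminal.example.org:0 xclock &
    # or, authenticated and tunnelled over ssh from the Berlin side:
    ssh -X physicist@analysis.cern.example.org xclock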
[131079730190] |X refers to the protocol and anything fundamental to the architecture, such as the X-server that runs the graphical display, pointing device and keyboard. [131079730200] |GTK and Qt are two general purpose GUI libraries that support windows, dialogs, buttons, etc. [131079730210] |GNOME and KDE are two desktop environments that manage the windows on the graphical desktop and provide useful applets and thingies like button bars. [131079730220] |They also allow multiple applications to communicate through the X-server (X-terminal device) even if the apps are running on different remote computers. [131079730230] |For instance copy and paste is a form of interapplication communication. [131079730240] |GNOME is built on top of GTK. [131079730250] |KDE is built on top of Qt. [131079730260] |And it is possible to run GNOME apps on a KDE desktop or KDE apps on a GNOME desktop because they all work with the same underlying X protocol. [131079740010] |Linux on the desktop and some servers is still all X and framebuffer graphics. [131079740020] |Under the X window system come GTK+ and Qt - yes, both use the X system - and again there are a lot of desktop environments: GNOME and KDE use the X display, their own shells, etc. [131079740030] |Btw, there was a recent video from linux.conf (http://blip.tv/file/4693305/). [131079740040] |Keith Packard from Intel spoke about X and GL*. It was interesting. [131079750010] |The X Window System uses a client-server architecture. [131079750020] |The X server runs on the machine that has the display (monitors + input devices), while X clients can run on any other machine, and connect to the X server using the X protocol (not directly, but rather by using a library, like Xlib, or the more modern non-blocking event-driven XCB). [131079750030] |The X protocol is designed to be extensible, and has many extensions (see xdpyinfo(1)). [131079750040] |The X server does only low level operations, like creating and destroying windows, doing drawing operations (nowadays most drawing is done on the client and sent as an image to the server), sending events to windows, ... [131079750050] |You can see how little an X server does by running X :1 & (use any number not already used by another X server) or Xephyr :1 & (Xephyr runs an X server embedded in your current X server) and then running xterm -display :1 & and switching to the new X server (you may need to set up X authorization using xauth(1)). [131079750060] |As you can see, the X server does very little: it doesn't draw title bars, doesn't do window minimization/iconification, doesn't manage window placement... [131079750070] |Of course, you can control window placement manually by running a command like xterm -geometry -0-0, but you will usually have a special X client doing the above things. [131079750080] |This client is called a window manager. [131079750090] |There can only be one window manager active at a time. [131079750100] |If you still have the bare X server of the previous commands open, you can try to run a window manager on it, like twm, metacity, kwin, compiz, larswm, pawm, ...
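As a copy-pasteable version of that experiment (any free display number works):

    Xephyr :1 &             # a nested X server that does almost nothing by itself
    xterm -display :1 &     # a bare client: no title bar, no placement policy
    twm -display :1 &       # start a window manager and frames/placement appear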
[131079750110] |As we said, X only does low level operations, and doesn't provide higher level concepts such as pushbuttons, menus, toolbars, ... [131079750120] |These are provided by libraries called toolkits, e.g.: Xaw, GTK, Qt, FLTK, ... [131079750130] |Desktop environments are collections of programs designed to provide a unified user experience. [131079750140] |So desktop environments typically provide panels, application launchers, system trays, control panels, configuration infrastructure (where to save settings). [131079750150] |Some well known desktop environments are KDE (built using the Qt toolkit), Gnome (using GTK), Enlightenment (using its own toolkit libraries), ... [131079750160] |Some modern desktop effects are best done using 3d hardware. [131079750170] |So a new component appears, the composite manager. [131079750180] |An X extension, the XComposite extension, sends window contents to the composite manager. [131079750190] |The composite manager converts those contents to textures and uses 3d hardware via OpenGL to compose them in many ways (alpha blending, 3d projections, ...). [131079750200] |Not so long ago, the X server talked directly to hardware devices. [131079750210] |A significant portion of this device handling has been moving to the OS kernel: DRI (permitting access to 3d hardware by X and direct rendering clients), evdev (unified interface for input device handling), KMS (moving graphics mode setting to the kernel), GEM/TTM (texture memory management). [131079750220] |So, with the complexity of device handling now mostly outside of X, it became easier to experiment with simplified window systems. [131079750230] |Wayland is a window system based on the composite manager concept, i.e. the window system is the composite manager. [131079750240] |Wayland makes use of the device handling that has moved out of X and renders using OpenGL. [131079750250] |As for Unity, it's a desktop environment designed to have a user interface suitable for netbooks. [131079760010] |The traditional stack is built upon 3 main components: [131079760020] |
  • X server, which handles displaying
  • [131079760030] |Window manager, which puts windows into frames, handles minimizing windows, etc. [131079760040] |That's part of the separation of mechanism from policy in Unix
  • [131079760050] |Clients that perform useful tasks, such as displaying the stackexchange website. [131079760060] |They may use the X protocol directly (suicide), use xlib or xcb (slightly easier), or use a toolkit such as GTK+ or Qt.
[131079760070] |The X architecture was made network-transparent, hence allowing the clients to be on a separate host from the server. [131079760080] |So far so good. [131079760090] |However, that was the picture from way back. [131079760100] |Nowadays it is not the CPU that handles the graphics but the GPU. [131079760110] |There have been various attempts to incorporate it into the model - and this is where the kernel comes into play to a greater extent. [131079760120] |At first, some assumptions were made regarding the usage of the graphics card. [131079760130] |For example, only on-screen rendering was assumed. [131079760140] |I cannot find this information on wikipedia right now, but DRI 1 also assumed that only one application would use OpenGL at the same time (I cannot quote right away, but I can dig up the bug where this was closed as WONTFIX with a note to wait for DRI 2). [131079760150] |A few temporary solutions have been proposed for indirect rendering (needed for composite WMs): [131079760160] |
  • XGL - an early proposition supporting applications talking directly to the card
  • [131079760170] |AIGLX - the accepted proposition, which uses the network properties of the OpenGL protocol
  • [131079760180] |The proprietary solution from NVidia
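To see which path a session actually ended up with, one common check (assuming the glxinfo tool from mesa-utils is installed):

    glxinfo | grep "direct rendering"
    # "direct rendering: Yes" means clients talk to the GPU directly;
    # "No" means rendering goes indirectly through the X server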
[131079760190] |Work on a newer architecture (DRI 2) was started. [131079760200] |That included: [131079760210] |
  • In-kernel support for memory handling (GEM/TTM)
  • [131079760220] |Kernel modesetting (KMS), allowing resolution changes in-kernel, hence avoiding delays when switching between X and the console, and a few other features (like displaying a message on panic even if X is running - which IIRC is planned to be implemented).
[131079760230] |Somewhat orthogonal to the move into the kernel, work on Gallium drivers was started. [131079760240] |The Mesa library started as an implementation of OpenGL on the CPU and then started using GPU acceleration. [131079760250] |It has always been tied to OpenGL. [131079760260] |In OpenGL 3.0 the model changed significantly and a rewrite of the library was inevitable. [131079760270] |However, they are taking the opportunity to split the code into several layers, extracting common code and providing a low-level API that allows implementing various 3D APIs on top of it - allowing, for example, Wine to provide DirectX by talking directly to Gallium instead of going through the OpenGL API layer (which may not have direct 1-1 calls). [131079760280] |Wayland is a project which considers the above a bit too complicated and burdened with too much "history". [131079760290] |According to its proponents, the design from 1984 (although highly modified and adapted) does not correspond to the beginning of the 21st century. [131079760300] |It is supposed to offer greater flexibility and better performance, although it is still missing some important features like full OpenGL support (and, important for some, network support). [131079760310] |A bit more about desktop environments and window managers. [131079760320] |The window manager is an application responsible for how windows behave - for example, it is responsible for managing workspaces, drawing the titlebar (the thing at the top of a window with the window title and minimise/maximise/close buttons), etc. [131079760330] |At first only minimal WMs were used, but afterwards users started wanting desktop environments - i.e. a more featured version, which included a start menu, desktop background, etc. [131079760340] |However, most parts of a desktop environment are not the window manager, although they are often integrated. [131079760350] |After some time, compositing WMs were introduced, which use OpenGL and indirect rendering to do their work. [131079760360] |They provide not only graphical effects but also allow easier implementation of some accessibility features (like a magnifier).