
Installing CyanogenMod onto a Samsung Galaxy S3 (International) – SIII / i9300 using Linux Mint 16 Petra (Ubuntu Saucy 13.10)

DISCLAIMER: Modifying or replacing your device’s software may void your device’s warranty, lead to data loss, hair loss, financial loss, privacy loss, security breaches, or other damage, and therefore must be done entirely at your own risk. I am not responsible for your actions. Good luck.

The new CyanogenMod installer requires Windows. Most of the manual tutorials assume you’re running Windows. Hopefully my summarised experience below will help other Linux users.


PostgreSQL functions and triggers

It is desirable to avoid using database functions. They are black boxes which gift only night horrors to unwary developers. A system is difficult to understand, maintain and debug when chunks of it lurk unseen in the DB.

Despite this, for certain features, using them does make sense, provided the database function’s code is source controlled and generous comments pointing to both the location and purpose of said code are spread evenly throughout the associated application code.

For example, in order to bill Ekaya agents accurately we needed a log of show house status changes, from upcoming to active to past or cancelled. Most of these changes were implemented in sweeping SQL statements that, while efficient at their own task, made it difficult to track individual changes.
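
As a minimal sketch of the approach (the table and column names here are hypothetical, not Ekaya’s actual schema), a row-level trigger can capture every status change, including those made by sweeping multi-row UPDATEs:

CREATE TABLE show_house_status_log (
    id            serial PRIMARY KEY,
    show_house_id integer NOT NULL,
    old_status    text,
    new_status    text NOT NULL,
    changed_at    timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_show_house_status() RETURNS trigger AS $$
BEGIN
    -- Only record rows whose status actually changed.
    IF NEW.status IS DISTINCT FROM OLD.status THEN
        INSERT INTO show_house_status_log (show_house_id, old_status, new_status)
        VALUES (NEW.id, OLD.status, NEW.status);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER show_house_status_audit
    AFTER UPDATE ON show_house
    FOR EACH ROW EXECUTE PROCEDURE log_show_house_status();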



PostgreSQL date formatting

Preamble

Formatting of data should only occur in the final steps of output. Until that point, as a rule, data should remain internally in a base format that can easily be converted into many others, without first having to be converted into some other more basic format.

For example, a Unix timestamp for dates, or an integer count of cents for money (floating point is tempting but invites rounding errors). Both can readily be converted into more expressive formats without writing code to first parse or disassemble the initial format.

However, in a situation where the flow is very specific and unlikely ever to be used to generate a different output, it is permissible, even desirable, to generate data in the format in which it will finally be output.
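
To make that final formatting step concrete in PostgreSQL terms (a sketch; events and created_at are hypothetical names), the value stays in its base form in the table and to_char does the presentation only at output time:

SELECT to_char(created_at, 'Dy DD Mon YYYY HH24:MI') AS created_display FROM events;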



Inside every DVD is a small movie trying to get out. Part 1: A quick guide to K9Copy

The problem with backing up a regular store-bought movie DVD is that it simply won’t fit onto a normal blank DVD. The movie DVD is a 9GB monster, and the blank a svelte 4.4GB. There are two solutions to this: re-encoding and transcoding. This article is a quick guide to performing a transcode using K9Copy (similar to DVDShrink) on GNU/Linux.

Simplifying terribly: re-encoding takes the better part of a day or longer, and the result is an .avi or similar file. These are so much smaller than the original that you can fit four or more re-encoded movies onto a single 4.4GB DVD, but they won’t play in a regular DVD player. A transcode, on the other hand, takes about an hour and results in a single movie on a 4.4GB DVD, which will play in a regular DVD player.

Transcoding works by lowering the quality of the movie to make it smaller. The smaller the desired end result, the worse it is going to look, but with a normal-sized movie you shouldn’t be able to tell the difference.

You will need

  • To have successfully consulted your distro’s documentation and installed K9Copy.
  • About 8GB of free hard drive space. I’ve heard you need this much, but I’ve never tested it.
  • Optional: MPlayer installed as well.

Method

  1. Insert the DVD you want to copy.
  2. After it has loaded, open K9Copy.
  3. Press Open. I think this is the biggest barrier to entry in this program.

  4. K9Copy should load a tree structure showing all the titles available on the DVD.

    A DVD can contain up to 99 titles; one of them will be the movie you’re looking for. The others are things like menus, extras, trailers and warnings. As a rule of thumb, the largest title is the one you want. To check that you have the right one, run the following on the command line (replace n with a number from 1 to 99):

    mplayer dvd://n

    Other things to try while viewing in MPlayer are pressing “#” to cycle through the audio tracks in the title and “j” to cycle through the subtitles.

  5. Open the title you want. Don’t be confused by the titlesets: you want the title. In my example, the title I wanted was number 1 (mplayer dvd://1), even though it was part of titleset 5 (I honestly don’t know what titlesets are).

    You can transcode the entire DVD: menus, extras, trailers and all, but those take up space, which leaves less room for the actual movie, and the end result may suffer. It’s a tradeoff; I tend to keep only the main feature.

  6. Under the title, select the video, audio, and subpictures (subtitles) you want to copy. In my example, I selected the video and only one audio stream; for this movie I didn’t need the other audio streams, nor any of the subtitles.

  7. Select Copy, find somewhere to save it, and wait.

First the DVD will be copied to your hard drive and then assembled into an .iso image that you can easily burn to a blank DVD. If you want to check the .iso image before burning, you can do so with this command (replace filename.iso with the name of the file you created):

mplayer dvd:// -dvd-device filename.iso

As before you can cycle through your audio and subtitle tracks to see that they are all there.

Note: you will not be able to see any copied menus in MPlayer; these will only show up when you place the burnt DVD into a DVD player.

Hope this helped you. In part 2, I will describe re-encoding a DVD into an H.264 MKV file.


PHP cURL over SSL

If you’re getting stuck trying to use cURL over HTTPS in PHP, try setting both the verify peer and verify host options to false. Note that this disables SSL certificate verification entirely, so it is only appropriate for testing or for endpoints you already trust:

$url = 'https://myverysecret.domain/secrets/';
$curl = curl_init($url);
// Don't verify the peer's SSL certificate...
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
// ...and don't check that the certificate matches the host name.
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, false);
curl_exec($curl);
curl_close($curl);

PAM and the Bad password blues

Warning: only try this at home! Using weak passwords on a computer that is accessible from the wild and dark Internet is tantamount to walking up to a spammer and saying “I’d simply love to be part of your zombie network; where do I sign up?”. I could safely do the following because this server is not accessible from the Internet and never will be; it’s a local test box for my own personal use.

I was creating a new user on a local CentOS 5.3 VirtualBox VM, and while I was setting the password I received the following error: BAD PASSWORD: it is based on a dictionary word

After some soul searching, I found I didn’t feel like coming up with, and then remembering, a password complicated enough to make PAM happy, i.e. not a dictionary word, long enough, and so on.

So I spent a while reading up on PAM—which, as it turns out, is a small team of alluring ladies and well worth stealing a look at.

Turns out my problem has a name, and that name is pam_cracklib.so. Ms. CrackLib will diligently check a new password against her dictionary and then check whether it is significantly different from the previous version, whether it is long enough, etc. Much of what she does is negotiable, but the dictionary check in the beginning she won’t budge on.
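
Much of that negotiation happens via options on her line in the PAM config. As a purely illustrative example (a hypothetical tightening, not the line from my box), minlen and difok are standard pam_cracklib knobs for the minimum length and for how many characters must differ from the old password:

password    requisite     pam_cracklib.so try_first_pass retry=3 minlen=12 difok=4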

So either we mess with her dictionary reading abilities—by say giving her a blank dictionary or hiding her glasses—or we take her out of the loop completely. I opted for the latter and set about cutting her out of my life.

The surgery took place in /etc/pam.d/system-auth. I took the following lines:

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok

and turned them into:

#password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass

I just commented out the pam_cracklib.so line and removed use_authtok from the line below it; otherwise passwd complains with “Authentication information cannot be recovered”.

SQL – Find the last DISTINCT items

This took me a while to figure out, so I thought it worth documenting. Here is a simplified example that explains the problem and solution, tested on MySQL 5.0.

An office worker keeps track of who she has contacted. After a while she builds up a table (call it contacts) as follows:

id  contact
1   josef
2   harry
3   sally
4   pudding
5   pudding
6   sally
7   harry
8   sally

Now she needs to see whom she contacted most recently, and in what order. In other words, with the above data she wants the following list: sally, harry, pudding, josef.

Her immediate reaction is SQL like this:

SELECT DISTINCT contact FROM contacts ORDER BY id DESC;

This returns the correct data, but in the wrong order: pudding, sally, harry, josef. This is because DISTINCT effectively keeps only the first instance of each contact as it appears in the table, and the ORDER BY then operates on those first occurrences; all the later rows are considered duplicates and ignored.

After some effort, the solution turns out to be not to use DISTINCT, but rather GROUP BY and ORDER BY MAX() to invoke magic:

SELECT contact FROM contacts GROUP BY contact ORDER BY MAX(id) DESC;

Excitingly, that returns the correct data in the correct order. We could still use a DISTINCT in there, but it would be superfluous and add unnecessary computation; GROUP BY already does the job of DISTINCT.

The example could be further complicated. For example, the table could have a timestamp column added (ORDER BY MAX() would work on that too), and a user id so that multiple office workers could use it (include a WHERE user_id = x to find the list for a particular user only).
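
As a sketch of that extended version (contacted_at and user_id being the hypothetical extra columns):

SELECT contact FROM contacts WHERE user_id = 3 GROUP BY contact ORDER BY MAX(contacted_at) DESC;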

Monitoring bandwidth with bwm-ng

Today I discovered how to watch the bandwidth usage of a server using a tool called bwm-ng. It’s in the Ubuntu Hardy repos; you can install it with the following command:

sudo aptitude install bwm-ng

When you run it in a terminal without parameters, it displays a running update of how much bandwidth is being used. This makes it a handy tool for getting an immediate idea of what your network traffic looks like, and there are also options to examine disk I/O. Besides the “live running commentary” mode, it can also output in other formats, for example CSV.

To create a poor man’s ntop, i.e. to generate a log file of a server’s bandwidth throughput over time, I wrote a script that is cronned every minute; the interesting part looks like this:

bwm-ng -o csv -c 6 -T rate -I eth0 >>bandwidth.log

Here’s a rundown of what the parameters do:

-o csv
Output in CSV format. Annoyingly, you have to download the source tarball from the bwm-ng site to get the file containing the legend for the generated CSV. Why not just include it in the man page? Being the considerate type, I have included this information later in this post.
-c 6
Print only 6 reads. Not 6 lines, but 6 reads of all the interfaces, including a total line.
-T rate
Type rate. Show only the rate; other options for this parameter are avg and max. Without this parameter the output also shows the amount of data and packets transferred.
-I eth0
Comma-separated whitelist of interfaces to show. Can also be used to blacklist interfaces. Even with only one interface, a total line is still printed.

This generates output which looks something like this:

1222420795;eth0;526.73;237.62;764.36;120;266;1.98;3.96;5.94;2;1;0.00;0.00;0;0
1222420795;total;526.73;237.62;764.36;120;266;1.98;3.96;5.94;2;1;0.00;0.00;0;0

And as promised the legend for the output:

unix_timestamp;iface_name;bytes_out;bytes_in;bytes_total;packets_out;packets_in;packets_total;errors_out;errors_in

All this data is then appended to a file called bandwidth.log ( >>bandwidth.log ).
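
For completeness, the crontab entry is along these lines (the script path and name are hypothetical; adjust to wherever your script lives):

* * * * * /usr/local/bin/log-bandwidth.sh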