
This · is · Not · a · Brain · Surgery

random thoughts on technology


* * *
As part of a small research project I set out to build a small sensor network to monitor temperatures at my home and office. Along the way I built some hardware (wireless ZigBee temperature sensors) and wrote software to collect data from existing devices. The purpose of this post is to share quick links to the scripts I have developed.

1. Main project page: https://github.com/vzaliva/xbee_temp_sensor contains the wireless sensor hardware design and a collection of scripts for:
a) Collecting data from my XBee-based sensors
b) Collecting data from 3M CT-50 Radio thermostat
c) Recording data from Weatherunderground.com weather stations
d) Submitting data to COSM (now renamed to XIVELY.com)

2. Data collection from a cheap USB temperature sensor and COSM submission: https://github.com/vzaliva/temper-python

3. Data collection from NEST thermostat and COSM submission: https://github.com/vzaliva/pynest

4. Data collection from Linux PC sensors (LMSENSORS) and COSM submission: https://github.com/vzaliva/lmsensors-cosm

If you want to learn more about the project, you can read my preliminary report or take a look at the presentation.
* * *
I probably already mentioned the weather similarity service. Today I got curious: what would a map of the Earth look like if cities were arranged by temperature similarity, regardless of where they are located geographically? Below is a picture of what I got:

weather similarity map

(I have labelled only cities with a population over 2 million.)

I wish I had a better way to show it in the browser. It would be nice to be able to zoom and to show tooltips with city names. I have not found a readily available JavaScript library for this, and I do not have time to code it myself. If somebody is interested in visualizing this, I will be glad to share my data.
* * *
Once you switch to the "retina" display on a MacBook Pro, there is no going back. You just cannot stop seeing huge pixels and artifacts on lower-resolution screens. So, naturally, when buying various devices with screens you want to keep an eye on PPI (pixels per inch). Here are some numbers (in decreasing order):

Nokia Lumia 920 - 332 PPI
iPhone 4/4S/5 Retina - 326 PPI
Samsung Galaxy S3 - 306 PPI
iPad 3 - 264 PPI
MacBook Pro 15-inch Retina display - 220 PPI
Amazon Kindle/Kindle Touch - 167 PPI
Microsoft Surface - 148 PPI
iPad 2 - 132 PPI
MacBook Air 13-inch - 128 PPI

(More numbers here)
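If a device is not on the list, its PPI is easy to compute from the native resolution and the diagonal screen size. A quick sketch (using the published 2880x1800 / 15.4-inch numbers for the 15-inch Retina MacBook Pro):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal resolution divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

# 15-inch Retina MacBook Pro: 2880x1800 panel, 15.4" diagonal
print(ppi(2880, 1800, 15.4))  # ~220, matching the list above
```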
* * *
A few people at our company took the Coursera "Functional Programming Principles in Scala" class. I think it is great when developers try to learn new languages and technologies, and we actually pay a small bonus for taking such classes.

In one of his presentations Martin Odersky showed a slide with the distribution of students per country (also shown on this map). Unfortunately it is not very useful, as it is not adjusted for country population. So my friend Alex wrote a small Scala program to parse the JSON and extract the data, and I used Mathematica to adjust it by country population. The table with the results can be seen here. The bar chart showing the top 40 countries by the percentage of their population that took the class looks like this:

Scala Students

USA is right behind Ukraine!
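The adjustment itself is trivial. Here is a sketch of it (the enrollment counts and population figures below are made up for illustration, not the actual course data):

```python
# Hypothetical enrollment counts and approximate populations --
# purely illustrative, not the real Coursera statistics.
students = {"Ukraine": 900, "USA": 6000, "India": 4000}
population = {"Ukraine": 45e6, "USA": 314e6, "India": 1.2e9}

# students per million inhabitants
per_capita = {c: students[c] / population[c] * 1e6 for c in students}
ranking = sorted(per_capita, key=per_capita.get, reverse=True)
print(ranking)
```

With numbers like these, a small country with a strong showing easily outranks a much larger one in per-capita terms, which is exactly what the chart shows.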
* * *
As an Amazon Prime customer I have access to Prime Instant Video. I have a PS3 with the Amazon Instant Video application connected to my TV, which I tried to use to watch some movies. Surprisingly, the playback was freezing all the time and the application was complaining about insufficient bandwidth. I have Comcast cable, and SpeedTest clocks it at 30Mbps/5Mbps. Out of curiosity I tried Instant Video playback on my Mac, and it showed that it was getting only around 300Kbps of bandwidth.

To make a long story short, the problem was in my DNS settings. I was using Google Public DNS at home. Apparently this prevented the Amazon CDN from properly detecting my location and streaming video from the closest server. Once I switched back to Comcast DNS, video playback speed improved and I am now able to stream movies in full HD.

I must report that after switching back to Comcast DNS I noticed that my web page browsing became slower. Apparently Google DNS is indeed faster!
* * *
While reading recently about Armelle Caron's work on city map deconstruction, it occurred to me that city maps could be analyzed using mathematical tools for image analysis. I started by looking at the city grids of Paris and New York:

New York:
New York

You can see a lot of straight streets in NY, while Paris is more curved. This can be seen more clearly by applying the Hough transform:

New York:

(you can click on images for larger resolution versions)
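For the curious, the transform is easy to play with. Here is a minimal Hough accumulator in NumPy (a toy two-street "grid" stands in for the actual map images):

```python
import numpy as np

def hough_lines(img, n_theta=180):
    """Minimal Hough transform: each foreground pixel votes for every
    (rho, theta) line parameterization passing through it."""
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(img)
    for i, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc[:, i], rhos + diag, 1)
    return acc, thetas

# Toy map: one horizontal and one vertical "street"
img = np.zeros((64, 64), dtype=bool)
img[32, :] = True
img[:, 16] = True
acc, thetas = hough_lines(img)
# Straight streets show up as sharp peaks in the accumulator;
# each 64-pixel street produces one peak with 64 votes.
```

A gridded city like New York yields a few very strong, isolated peaks, while a curved layout like Paris spreads its votes across many weaker bins.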

We can also explore spatial frequencies of these maps using Discrete Fourier transform:

New York:

I will not elaborate here on how these images should be interpreted. If you are unfamiliar with these transformations and interested in learning more, they are well described in many other sources.
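The same kind of toy experiment works for the spectral view: a rigid rectangular street grid concentrates its energy in sharp, regularly spaced peaks, while a curved network smears it around. A synthetic grid (not the real map data) illustrates this:

```python
import numpy as np

# Synthetic "city": perfectly regular streets every 8 pixels
grid = np.zeros((64, 64))
grid[::8, :] = 1.0   # horizontal streets
grid[:, ::8] = 1.0   # vertical streets

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(grid)))
# Energy concentrates at the DC term and at harmonics of the street
# spacing; an irregular layout would spread it much more evenly.
```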
* * *
I was recently working on a computer vision problem which I would like to share with you. We were dealing with images of a restaurant menu containing mostly text. The images were taken with a mobile phone camera while the user held the menu in his hand. Typically the image captures just part of the page (page boundaries were not necessarily included). The challenge was to perform perspective correction on such images. We treated this as a practical rather than an academic problem, and thus did not aim for a perfect solution; we were ready to accept one offering only a partial improvement.
The main, hopefully new, contributions of our approach are:
  1. Partial, horizontal-only perspective correction.
  2. A new, statistical approach to choosing a horizontal vanishing point.
The problem of perspective correction is well studied; please see the references at the end of this post for related work. The basic idea is as follows: given a pair of parallel lines, we can find a vanishing point. In a non-distorted image, all vanishing points lie on the line at infinity [1]. However, in an image projection, parallel lines may intersect at a real point. Given two pairs of lines which are supposed to be parallel (before projection), we can find two vanishing points which define the line at infinity after perspective distortion. The idea is illustrated below. Vh and Vv are the horizontal and vertical vanishing points, respectively.
Now, working in homogeneous coordinates [5], we can build a homography [7] which translates this line back to the ideal line at infinity. Applying this homography to all pixels of a perspective-distorted image allows us to reconstruct the original.
Our first task is to find two pairs of lines in the distorted image which were parallel in the original image. One obvious idea is to find horizontal lines corresponding to lines of printed text. The intersection of two such lines gives us a horizontal vanishing point. For the type of images we are dealing with, unfortunately, there is no easy way to detect vertical lines. In the absence of vertical page margins in typical images, there are simply not enough vertical features with which to align such lines. So we must resign ourselves to partial perspective correction, using the horizontal vanishing point alone. This is done by assuming that the vertical vanishing point is located at infinity in the positive direction of the y axis. The coordinates of such a point in homogeneous coordinates are (0,1,0), with the last zero indicating an ideal point (point at infinity).

Starting with a pre-processed image (already converted to black and white), we apply the Hough transform [6] to detect straight lines. Because we are interested in horizontal lines, and assuming that we are not dealing with extreme cases of significantly distorted images, we can limit the line angles we consider to +-Pi/3 from the x axis. The transform can be applied to a scaled-down image, provided that the aspect ratio is preserved; this speeds up the computations. Next, we threshold the results of the Hough transform and convert the resulting detected lines from polar to homogeneous coordinates. As a result, we have a small set of potentially horizontal lines. In theory any pair of them should suffice, but in practice some detected lines may not correspond to horizontal lines of text, instead representing noise or other image features.

To select the two most suitable lines from this set, we use a statistical approach. First we build the pairwise intersections of all lines in the set. In homogeneous coordinates, the intersection of two lines is the cross product of their coordinate vectors. This gives us a set of potential horizontal vanishing points. We filter this set, excluding points at infinity, points falling within the bounding rectangle of the projected image, and, to exclude extreme cases of perspective distortion, points located too close to the center of coordinates in the horizontal direction. The "too close" criterion is expressed as a threshold on the absolute value of the ratio of the x coordinate of a potential horizontal vanishing point to the image width. Each of the remaining points can be used to calculate a homography performing horizontal perspective correction:

Hp = | 1 0 0 |
     | 0 1 0 |
     | a b c |

where (a,b,c) is a projection of the line at infinity, calculated as the cross product of Vh and Vv=(0,1,0). As a result of such correction, the two lines used to produce the chosen point become parallel. Our working assumption is that, because of the regular text line structure of the original image, most of the lines we detected were parallel. This allows us to define an evaluation metric of a homography's suitability: the standard deviation of the angles between all corrected lines and the x axis. The vanishing point producing the minimal standard deviation corresponds to the transformation which makes the set of lines closest to parallel.
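This selection procedure can be sketched in a few lines of NumPy. The threshold value and helper names below are my own illustrative choices, not part of the original implementation:

```python
import numpy as np
from itertools import combinations

def pick_vanishing_point(lines, img_w, min_x_ratio=3.0):
    """Pick the horizontal vanishing point whose correction homography
    makes the detected lines most nearly parallel (min angle std-dev).
    `lines` are homogeneous line vectors (a, b, c): ax + by + c = 0."""
    best_score, best_vp = np.inf, None
    for l1, l2 in combinations(lines, 2):
        v = np.cross(l1, l2)                 # intersection of the two lines
        if abs(v[2]) < 1e-9:                 # discard points at infinity
            continue
        x, y = v[0] / v[2], v[1] / v[2]
        if abs(x) < min_x_ratio * img_w:     # too close to the image center
            continue
        # image of the line at infinity: cross(Vh, Vv) with Vv = (0, 1, 0)
        linf = np.cross([x, y, 1.0], [0.0, 1.0, 0.0])
        Hp = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], linf / linf[2]])
        Hl = np.linalg.inv(Hp).T             # lines transform as l' = H^-T l
        angles = []
        for l in lines:
            a, b, _ = Hl @ l
            ang = np.arctan2(-a, b)          # direction angle vs the x axis
            angles.append((ang + np.pi / 2) % np.pi - np.pi / 2)  # fold to [-pi/2, pi/2)
        score = np.std(angles)
        if score < best_score:
            best_score, best_vp = score, np.array([x, y, 1.0])
    return best_vp

# Three near-horizontal "text lines" meeting at a vanishing point (1000, 0)
lines = [np.array([m, -1.0, -1000.0 * m]) for m in (0.01, 0.02, -0.015)]
vp = pick_vanishing_point(lines, img_w=100)
```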
The resulting homography performs perspective correction, making our set of original horizontal lines closer to parallel after the transformation. However, this does not make them actually horizontal; we need an additional affine rotation homography to achieve this. After the perspective transformation, we take the mean value of the angles between the lines in our set and the x axis as a rotation angle theta. The homography representing this rotation is:

Ha = | cos(theta) -sin(theta) 0 |
     | sin(theta)  cos(theta) 0 |
     |     0           0      1 |

The final homography, combining perspective correction and rotation, is the composition of the two transformations

H = Ha Hp

which, conveniently, has the form:

H = | cos(theta) -sin(theta) 0 |
    | sin(theta)  cos(theta) 0 |
    |     a           b      c |
Sample results of the algorithm are shown below. Lines used for horizontal correction are shown in orange. The original image:  

The corrected image:

[1] R. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", Second Edition, Cambridge University Press, 2004.
[2] P. Clark, "Estimating the orientation and recovery of text planes in a single image," Proceedings of the 12th British Machine Vision Conference, 2001.
[3] V. Cantoni, L. Lombardi, and M. Porta, "Vanishing point detection: representation analysis and new approaches," Proceedings of the International Conference on Image Analysis and Processing (ICIAP), 2001.
[4] L. Jagannathan, "Perspective correction methods for camera based document analysis," Proceedings of the Workshop on Camera-Based Document Analysis and Recognition, pp. 148-154, 2005.
[5] Wikipedia contributors. Homogeneous coordinates. Wikipedia, The Free Encyclopedia. May 22, 2012, 07:12 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Homogeneous_coordinates&oldid=493787558. Accessed June 12, 2012.
[6] Wikipedia contributors. Hough transform. Wikipedia, The Free Encyclopedia. June 11, 2012, 00:12 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Hough_transform&oldid=496983042. Accessed June 12, 2012.
[7] Wikipedia contributors. Homography. Wikipedia, The Free Encyclopedia. March 12, 2012, 06:33 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Homography&oldid=481473030. Accessed June 12, 2012.
* * *
My 9-year-old kid uses the Internet. I see nothing wrong with that. The Internet is no longer an optional part of this world, and he needs to be comfortable with it. He reads Wikipedia, visits LEGO-related sites, etc. I have enabled parental controls on his Mac, so hopefully some inappropriate web sites are blocked. The only problem I have is the amount of time he spends on YouTube. YouTube can be an amazing educational resource, but at the same time it can be an amazing time waster. I decided to limit the amount of time he spends on YouTube, just as many parents limit the amount of time their kids spend in front of the TV. I found no easy way to do this short of blocking his access to the computer entirely according to a pre-defined schedule. But I wanted to block only YouTube and let him use everything else. Below I will share my solution to the problem. It is a bit involved, and requires you to have a Linux box on your home network, but this is the only working solution I have found so far.

The trick is to use an HTTP proxy (Squid) with the squidGuard plugin. On RHEL you can install and activate the required packages using the following commands:

yum install squid squidguard squidguard-blacklists
chkconfig squid on

Let us start with the default config file:

cd /etc/squid
cp squidguard-blacklists.conf squidguard.conf

Now edit squidguard.conf, adding the following lines at the bottom:

time playhours {
    weekly * 19:00-20:00
}

src all {
}

### ACL definition
acl {
    all within playhours {
        pass good !bad !adult !aggressive !hacking !warez any
        redirect 302:http://localhost/access-denied.html?url=%u
    } else {
        pass good !bad !adult !aggressive !audio-video !hacking !warez any
        redirect 302:http://localhost/access-denied.html?url=%u
    }

    default {
        pass none
        redirect http://localhost/block.html
    }
}
This config allows access to video web sites (which include YouTube) only during one hour a day:
from 7pm to 8pm. You can devise a more complex schedule if you want to. In addition to blocking
YouTube, this config blocks various adult and hacking web sites at all times, as an extra level of protection.

To test your config you can use this handy command:

echo "http://www.youtube.com - - GET" | sudo -u squid squidGuard -c /etc/squid/squidguard.conf -d

To activate it, edit squid.conf adding the following line:

url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidguard.conf

Now all you need is to set up your child's computer to use an HTTP proxy, pointing at the IP of the Squid machine and the default
port 3128. Unfortunately, due to a bug in Mac OS versions up to Lion, you need to disable parental controls on the account to make the proxy settings work.

As an extra measure of protection, you can use OpenDNS free parental controls. Here is what my OpenDNS dashboard looks like:

Screen Shot 2012-02-24 at 17.48.48

After setting up an OpenDNS account, all you need to do is add the following line to squid.conf:


Hint: if you have dynamic IP, you can use ddclient to update it at OpenDNS site using this config.

And here is what is shown if YouTube is accessed outside the permitted time slot:

Screen Shot 2012-02-24 at 17.50.49
* * *
Designing a mobile user interface is not very complex. Just use common sense, think about the user, and keep an eye on limited screen real estate. But some people just do not get it. For example, the Foursquare Android app, which I use daily, is driving me nuts. Take a look at the first screen I see when it starts:

4sq screenshot

It opens on the "Friends" tab, which is supposed to show me where my friends are. OK, makes sense. What do I see?

1. It shows me where I am. Doh! I know where I am; no need to remind me. I want to see my friends, and given the limited screen space it makes no sense to waste it reminding me where I am now.

2. It shows me the same information again, this time under the "Last 3 hours" section. This is just plain stupid.

At this point, 1/3 of the screen space which could have been used to show my friends' locations has been wasted just to remind me that I am at the Starbucks where I checked in just two minutes ago. Taking into account the additional space used by three levels of toolbars on top, there is just enough useful screen space left to show the locations of only two of my friends.
* * *
There is a lot of buzz about Bitcoin in the news recently. Most people are excited about the distributed, untraceable, and unregulated nature of this new currency. However, there is another, overlooked aspect of it, related to the practice of "bitcoin mining." Anybody can now buy some hardware and start minting his own bitcoins. Initial investment in hardware aside, all you need is to spend some electricity to produce money (bitcoins). Depending on your electricity costs and bitcoin exchange rates, you can make a small but steady monthly income.

An interesting aspect of this electricity-to-money conversion is that the electricity does not need to be transferred far and can be consumed very close to the generator producing it. For example, you could have a module with a solar panel on top and a bitcoin-generating computer inside, which you can put anywhere to get free money literally growing under the sun!

For example, the picture below shows a 1000W solar photovoltaic system. It generates enough electricity to power a PC with 2-4 powerful GPU cards, capable of very decent bitcoin mining performance.
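Whether this actually pays off is simple arithmetic. A back-of-the-envelope sketch (every number in the example call is made up for illustration; plug in your own hardware specs, electricity rate, mining yield, and exchange rate):

```python
def monthly_mining_profit(hashrate_mhs, power_watts, usd_per_kwh,
                          btc_per_mhs_per_day, usd_per_btc):
    """Rough monthly estimate: value of mined coins minus electricity."""
    revenue = hashrate_mhs * btc_per_mhs_per_day * 30 * usd_per_btc
    electricity = power_watts / 1000.0 * 24 * 30 * usd_per_kwh
    return revenue - electricity

# Entirely hypothetical rig: 1000 MH/s drawing 800 W, $0.12/kWh,
# 0.0001 BTC per MH/s per day, $10 per BTC
profit = monthly_mining_profit(1000, 800, 0.12, 0.0001, 10)
```

With solar power the electricity term drops out (after the panel investment), which is exactly what makes the idea above attractive.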

* * *
