Java Guy Reboots with C++ – Trying to Understand Memory Management

Some random hints if you get started (in my case: restarted) with C++ – let me know if I wrote something wrong in my personal memos:

  1. Use RAII: Resource Acquisition Is Initialization
  2. Use scoped variables which get destroyed automatically after leaving the block:
    MyClass myVar("Hello Memory");
  3. To use call by reference, use it in the method declaration. But be aware that you might be optimizing when it is not necessary. Read about Want Speed? Pass by Value.
    void myMethod(MyClass& something)
  4. Understand the Rule of Three: If you need to explicitly declare either the destructor, copy constructor or copy assignment operator yourself, you probably need to explicitly declare all three of them. See More C++ Idioms.
  5. Understand and avoid traps; understand shallow copies.
  6. Understand heap vs. stack allocation, e.g. also that heap allocations are much slower than allocations on the stack.
  7. Use heap allocation when you want to change the memory usage of an object dynamically …
  8. … or prefer stl (e.g. vector)
  9. … or when an object should be used outside of a method scope: All dynamically allocated memory must be released before the pointer (except smart pointers) pointing to it goes out of scope. So if the memory is dynamically allocated for a variable within a function, the memory should be released within the function, unless a pointer to it is returned or stored by that function.
  10. The = operator of auto_ptr works differently than you might expect: it transfers ownership!
  11. Read the FAQ or this light-FAQ
  12. oh my: auto_ptr, shared_ptr, smart_ptr, …!!?
  13. Here is a nice compilation of common possible pitfalls.

… but I’m still fighting a bit to understand the memory management problem:

1. Are there some rules of thumb?

E.g.

  1. use RAII
  2. if you cannot apply rule 1, use shared_ptr
  3. if you cannot apply rule 2, use new + delete?

2. And how do I solve in C++ what I would do in Java by returning a locally constructed object (via new)?

E.g. the factory pattern there can look like

public static MyClass createObj() {
  return new MyClass();
}

3. And how would you e.g. put a vector with a lot of data into a different variable?

Can I rely on the ‘Pass by Value‘ thing which boosts performance? Or should I use tmpVectorObj.swap(vectorObj2)?

4. How would you fill a vector within a method?

This is forbidden, I think:

// declare vector<Node> vectorObj in the class
void addSomething(string tmp) {
  Node n(tmp);
  vectorObj.push_back(n); // std::vector has no add method; push_back copies n
}

5. What are the disadvantages of boost’s smart pointers?

Ugly but most efficient setSize for ArrayList


private static final Field sizeField;

static {
    try {
        sizeField = ArrayList.class.getDeclaredField("size");
        sizeField.setAccessible(true);
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }
}

// 'setSize'
public static <T> void growSize(final ArrayList<T> list, final int maxSize) {
    if (maxSize <= list.size())
        return;

    list.ensureCapacity(maxSize);
    try {
        sizeField.setInt(list, maxSize);
    } catch (Exception ex) {
        throw new RuntimeException("Problem while setting private size field of ArrayList", ex);
    }
}

Here is the reported issue and here a discussion. If you need to decrease the size too, have a look at Vector.setSize.

A less hacky but also less efficient version would be:

public static <T> void growSizeSlower(final ArrayList<T> list, final int maxSize) {
    if (maxSize <= list.size())
        return;

    list.addAll(new AbstractList<T>() {

        @Override public Object[] toArray() {
            return new Object[maxSize - list.size()];
        }

        @Override public T get(int index) {
            throw new UnsupportedOperationException("Not supported yet.");
        }

        @Override public int size() {
            throw new UnsupportedOperationException("Not supported yet.");
        }
    });
}
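
A minimal usage sketch of the slower variant (self-contained, so the helper is repeated; the class name GrowDemo is made up for the demo). Note that ArrayList.addAll only calls toArray on the passed collection, which is why the throwing get and size methods are never hit:

```java
import java.util.AbstractList;
import java.util.ArrayList;

public class GrowDemo {
    // the non-reflective variant from above, repeated so this compiles standalone
    public static <T> void growSizeSlower(final ArrayList<T> list, final int maxSize) {
        if (maxSize <= list.size())
            return;
        list.addAll(new AbstractList<T>() {
            // ArrayList.addAll only calls toArray, so an array of nulls is enough
            @Override public Object[] toArray() {
                return new Object[maxSize - list.size()];
            }
            @Override public T get(int index) {
                throw new UnsupportedOperationException();
            }
            @Override public int size() {
                throw new UnsupportedOperationException();
            }
        });
    }

    public static void main(String[] args) {
        ArrayList<Integer> list = new ArrayList<Integer>();
        list.add(1);
        growSizeSlower(list, 4);
        System.out.println(list.size()); // the list now has 4 slots
        System.out.println(list.get(3)); // the new slots are filled with null
    }
}
```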

EC2 = Easy Cloud 2.0? Getting started with the Amazon Cloud

If you are a command-line-centric guy like me and you are on Ubuntu, this post is for you. Getting started with Amazon was a pain for me, although once you understand the basics it is relatively easy. BTW: there are of course also other cloud systems like Rackspace or Azure.

If you want the official Ubuntu LTS Server (currently 10.04) running in the Amazon Cloud you can do:

ec2-run-instances ami-c00e3cb4 --region eu-west-1 --instance-type m1.small --key amazon-key

or go to this page and pick a different AMI. Hmmh, you are already sick of all the wording like AMI, EC2 and instances? Ok, let’s dig into the Amazon world.

Let me know if I have something missing or incorrect:

  • AMI: Amazon Machine Image. In our case this is a highly tuned Linux distribution, and we can choose from a lot of different types – e.g. on this page.
  • EC2: Elastic Compute Cloud – a highly scalable hosting solution where you have root access to the server. You can choose the power and RAM of such an instance (‘a server’) and start and stop instances as you like. In Germany Amazon is relatively expensive compared to existing hosting solutions (not the case in the US). And since those services can also scale easily, there is nearly no advantage in using Amazon or Rackspace.
  • EBS: Elastic Block Storage – this is where we store our data. An EBS volume can be attached to any instance, but in my case I don’t need a separate volume; I can just use the default EBS mounted at /mnt with ~150 GB or even the system partition / with ~8 GB. From Wikipedia:
    EBS volumes provide persistent storage independent of the lifetime of the EC2 instance, and act much like hard drives on a real server.
    Also, if you choose storage of type ‘ebs’ your instance can be stopped. If it is of type instance-store you can only clone the AMI and terminate it. If you try to stop it you’ll get “The instance does not have an ‘ebs’ root device type and cannot be stopped.”
  • A running instance is always attached to one key (a named public key). Once started you cannot change it.
  • S3: Simple Storage Service. Can be used for e.g. backup purposes, has an own API (REST or SOAP). Not covered in this mini post.
  • Availability zone: The datacenter location, e.g. eu-west-1 is Ireland and us-west-2 is Oregon (strictly speaking these are regions, each containing several availability zones like eu-west-1a). The advantage of having different regions/zones is that if one datacenter crashes you have a fallback in a different one. But the big disadvantage of different zones is that e.g. transferring your customized AMIs to a different region is a bit complex, and you’ll need to import your keys again etc.

But even now, after ‘understanding’ the wording, it is not that easy to get started, and e.g. the above command will not work out of the box.

To make the above command work you’ll need:

  1. An Amazon account and a lot of money 😉 – or use the micro instance, which is free for one year for a fresh account IMO
  2. The ec2 tools installed locally: sudo apt-get install ec2-api-tools
  3. The amazon credentials stored and added to your ssh-agent:
    export EC2_PRIVATE_KEY=/home/user/.ssh/certificate-privatekey.pem
    export EC2_CERT=/home/user/.ssh/certificate.pem
  4. Test the functionality via
    ec2-describe-instances --region eu-west-1
  5. Now you need to create a key pair and import the public one into your account (choose the right availability zone!)
    Aws Console -> Ec2 -> Network & Security -> Key Pairs -> Import Key Pair and choose amazon-key as name
  6. Then feed your local ssh-agent with the private key:
    ssh-add /home/user/.ssh/amazon-key
  7. Now you should be able to run the above command. To view the instance from the web UI you’ll have to refresh the site.
  8. Open port 22 for the default security group:
    Aws Console -> Ec2 -> Network & Security -> Security Groups -> Click on the default one and then on the ‘inbound’ Tab -> type ’22’ in port range -> Add Rule -> delete the other configurations -> Apply Rule Changes
  9. Now try to login
    ssh ubuntu@ec2-your-machine.amazonaws.com
    For the official Amazon AMIs you’ll have to use ec2-user as the login

That was easy 🙂 No?

Ok, now you’ll have to configure and install software as you like e.g.
sudo apt-get update && sudo apt-get upgrade -y

To proceed further you could

  • Attach a static IP to the instance so that external applications do not need to be changed after you moved the instance – or use that IP for your load balancer – or use the Amazon load balancer etc:
    Aws Console -> Ec2 -> Network & Security -> Elastic IPs -> Allocate New Address
  • Open some more ports like port 80
  • Or you could create an AMI of your already configured system. You can even publish this custom AMI.
  • Run ElasticSearch as search server in the cloud e.g. even via a debian package which makes it very easy.

Now if you have several instances and you want to

update software on all machines.

How would you do that? Here is one possibility:

ips=`ec2-describe-instances --region eu-west-1 | grep running | cut -f17 | tr '\n' ' '`

for IP in $ips
do
 echo UPDATING $IP;
 ssh -A ubuntu@$IP "cd /somewhere; bash ./scripts/update.sh";
done

Shortest Code for a Simple Calculator on Android

String RESULT;
String input = "(1+3)/4 * 2 - 7";
...
webSettings.setJavaScriptEnabled(true);
...
webView.addJavascriptInterface(new JavaScriptInterface() {
   public void returnResult(String o) {
       RESULT = o;
   }
}, "JavaCallback");
webView.loadUrl("javascript:window.JavaCallback"
   + ".returnResult(" + input + ")");
// now RESULT is "-5"

Is there a shorter one? BTW: this is only a sketch – not sure if I missed a bracket somewhere …

Logitech USB Headset volume is low [Ubuntu] – Fixing this with a script

There is a nasty bug in Ubuntu – even in 10.* and 11.*! But there is a simple fix via alsamixer. The problem is that the volume is always near 0 when plugging in the device again. So this mini post shows how to find out the command to increase the volume, which you can then execute e.g. on startup. First you need to find out which card the device is – in my case it’s number 1 – then you need to list the controls:

$ amixer -c 1 scontrols
Simple mixer control 'Speaker',0
Simple mixer control 'Mic',0

As a last step, increase the volume of one or more controls:
$ amixer -c 1 sset 'Speaker',0 90% 90%
Simple mixer control 'Speaker',0
Capabilities: pvolume pswitch pswitch-joined penum
Playback channels: Front Left - Front Right
Limits: Playback 0 - 44
Mono:
Front Left: Playback 40 [91%] [2.72dB] [on]
Front Right: Playback 40 [91%] [2.72dB] [on]

Now there is only one problem: how to automatically switch from device 0 to my USB device 1? Here is the solution:

pacmd list-sinks | grep index
pacmd set-default-sink

or the full script:
amixer -c 1 sset 'Speaker',0 70% 70%
amixer -c 1 sset 'Mic',0 70% 70%

# switch mic
sources=($(pacmd list-sources | grep index | awk '{ if ($1 == "*") print "1",$3; else print "0",$2 }'))
[[ ${sources[0]} = 0 ]] && swap=${sources[1]} || swap=${sources[5]}
echo $swap
pacmd set-default-source $swap &> /dev/null

# switch audio
sinks=($(pacmd list-sinks | grep index | awk '{ if ($1 == "*") print "1",$3; else print "0",$2 }'))
[[ ${sinks[0]} = 0 ]] && swap=${sinks[1]} || swap=${sinks[3]}
pacmd set-default-sink $swap &> /dev/null

Now have a look here where it is described how to call this script when the device is plugged in.

Jetslide uses ElasticSearch as Database


This post explains how one could use the search server ElasticSearch as a database. I’m using ElasticSearch as my only data storage system, because for Jetslide I want to avoid the maintenance and development time overhead which would be required when using a separate system – be it NoSQL, object or pure SQL DBs.

ElasticSearch is a really powerful search server based on Apache Lucene. So why can you use ElasticSearch as a single point of truth (SPOT)? Let us begin and go through all – or at least my – requirements of a data storage system! Did I forget something? Add a comment 🙂 !

CRUD & Search

You can create, read (see also realtime get), update and delete documents of different types. And of course you can perform full text search!

Multi tenancy

Multiple indices are very easy to create and to delete. This can be used to support several clients or simply to put different types into different indices like one would do when creating multiple tables for every type/class.

Sharding and Replication

Sharding and replication are just a matter of numbers when creating the index:

curl -XPUT 'http://localhost:9200/twitter/' -d '
index :
    number_of_shards : 3
    number_of_replicas : 2'

You can even update the number of replicas afterwards ‘on the fly’. To update the number of shards of an index you have to reindex (see the reindexing section below).
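
Updating the replica count could look like this – a sketch using the index settings API, with the same index name as above:

```
curl -XPUT 'http://localhost:9200/twitter/_settings' -d '
index :
    number_of_replicas : 3'
```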

Distributed & Cloud

ElasticSearch can be distributed over a lot of machines. You can dynamically add and remove nodes (video). Additionally read this blog post for information about using ElasticSearch in ‘the cloud’.

Fault tolerant & Reliability

ElasticSearch will recover from the last snapshot of its gateway if something ‘bad’ happens like an index corruption or even a total cluster fallout – think time machine for search. Watch this video from Berlin Buzzwords (minute 26) to understand how the ‘reliable and asynchronous nature’ are combined in ElasticSearch.

Nevertheless I still recommend doing a backup from time to time to a different system (or at least a different hard disc), e.g. in case you hit ElasticSearch or Lucene bugs – or simply to make it really secure 🙂

Realtime Get

When using Lucene you have a realtime latency: if you store a document into the index, you’ll have to wait a bit until it appears when you search afterwards. Although this latency is quite small – only a few milliseconds – it is there, and it grows as the index gets bigger. But ElasticSearch implements a realtime get feature in its latest version, which makes it possible to retrieve a document by its id even if it is not yet searchable!

Refresh, Commit and Versioning

As I said, you have a realtime latency when creating or updating (aka indexing) a document. To update a document you can use the realtime get, merge the changes and put the document back into the index. Another approach, which avoids further hits on ElasticSearch, would be to call refresh (or commit in Solr) on the index. But this is very problematic (e.g. slow) when the index is not tiny.

The good news is that you can again solve this problem with a feature of ElasticSearch – it is called versioning. This is identical to ‘application side’ optimistic locking in the database world. Put the document into the index, and if that fails, e.g. merge the old state with the new and try again. To be honest this requires a bit more thinking, using a failure queue or similar, but now I have a really well working system, secured with unit tests.

If you think about it, this is a really huge benefit over e.g. Solr. Even if Solr’s raw indexing is faster (no one has really done a good job comparing the indexing performance of Solr vs. ES), it requires a call to commit to make the documents searchable, which slows down the whole indexing process a lot compared to ElasticSearch, where you never really need to call the expensive refresh.
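
The optimistic locking part could be sketched like this (the version parameter is part of ElasticSearch’s index API; the index and type names are made up):

```
# index a document - the response contains its current version, e.g. 1
curl -XPUT 'http://localhost:9200/userindex/user/1' -d '{"name" : "tester"}'

# update it, but only if nobody else changed it in the meantime;
# with a stale version ElasticSearch answers with a version conflict
curl -XPUT 'http://localhost:9200/userindex/user/1?version=1' -d '{"name" : "tester2"}'
```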

Reindexing

This is not necessary for a normal database, but it is crucial for a search server, e.g. to change an analyzer or the number of shards of an index. Reindexing sounds hard but can be implemented easily in ElasticSearch, even without a separate data storage. For Jetslide I’m not storing single fields; I’m storing the entire document as JSON in the _source. This makes it possible to fetch the documents from the old index and put them into the newly created one (with different settings).

But wait – how can I fetch all documents from the old index? Wouldn’t this be bad in terms of performance or memory for big indices? No, you can use the scan search type, which avoids e.g. scoring.
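
Fetching everything via the scan search type could look like this (a sketch; the scroll id handling is shortened and the index name is made up):

```
# start a scan - the response contains a scroll id but no hits yet
curl -XGET 'http://localhost:9200/userindex6/_search?search_type=scan&scroll=10m&size=50' -d '
{ "query" : { "match_all" : {} } }'

# fetch the next batch of documents with the returned scroll id
curl -XGET 'http://localhost:9200/_search/scroll?scroll=10m' -d '<scroll_id>'
```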

Ok, but how can I replace my old index with the new one? Can this be done ‘on the fly’? Yes, you can simply switch the alias of the index:

curl -XPOST 'http://localhost:9200/_aliases' -d '{
"actions" : [
   { "remove" : { "index" : "userindex6", "alias" : "userindex" } },
   { "add" : { "index" : "userindex7", "alias" : "userindex" } }]
}'

Performance

Well, ElasticSearch is fast. But you’ll have to determine for yourself if it is fast enough for your use case, and compare it to your existing data storage system.

Feature Rich

ElasticSearch has a lot of features which you do not find in a normal database, e.g. faceting or the powerful percolator, to name only a few.

Conclusion

In this post I explained if and how ElasticSearch can be used as a database replacement. ElasticSearch is very powerful, but e.g. the versioning feature requires a bit of handwork. So working with ElasticSearch is more comparable to the JDBC or SQL world than to the ORM one. But I’m sure some ORM tools for ElasticSearch will pop up – although I prefer to avoid system complexity and will always use the ‘raw’ ElasticSearch, I guess.

Introducing Jetslide News Reader

Update: Jetsli.de is no longer online. Check out the projects snacktory and jetwick, which were used in Jetslide.


We are proud to announce the release of our Jetslide News Reader today! We know that there are a lot of services aggregating articles from your Twitter timeline, such as the really nice tweetedtimes.com or paper.li. But as a hacker you’ll need a more powerful tool. You’ll need Jetslide. Read on to see why Jetslide is different, and read this feature overview. By the way: yesterday we open sourced our content extractor called snacktory.

Jetslide is different …

… because it divides your ‘newspaper’ into easily navigable topics, and Jetslide prints articles from your timeline first! So you are following topics and not (only) people. See the first article, which was referenced by a Twitter friend and others; but it also prints articles from the public timeline. See the second article, where the highest share count (187) comes from digg. Click to view today’s reality, or browse older content with the links under the articles:

Jetslide is smart …

… enough to skip duplicate articles and enhance your topics with related material. The relevance of every article is determined by an advanced algorithm (number of shares, quality, tweet, your browser language …) with the help of my database ElasticSearch – more on this in a later blog post.

And you can use a lot of geeky search queries to get what you want.

Jetslides are social

As pointed out under ‘Jetslide is different’, you’ll see articles posted in your Twitter timeline first. But there is another feature which makes Jetslide more ‘social’. First, you get suggestions of users if they have the same or similar interests stored in their Jetslide. And second, Jetslide enables you to see others’ personal Jetslide by adding e.g. the parameter owner=timetabling to the URL.

Jetslide means RSS 3.0

You can even use the boring RSS feed:

http://jetsli.de/rss/owner/timetabling/time/today

But this is less powerful. The recommended way to ‘consume’ your topics is via RSS 3.0 😉

Log in to Jetslide and select “Read Mode: Auto”. Then every time you hit the ‘next’ arrow (or CTRL+right), the currently viewed articles will be marked as read, and only newer articles will pop up the next time you slide through. This way you can slide through your topics and come back whenever you want: after 2 hours or after 2 days (at the moment up to 7 days). In Auto-Read-Mode you’ll always see only what you have missed and what is relevant!

This is the most important reason why we do not call Jetslide a search engine but a news service.

Jetslides are easily shareable

… because a Jetslide is just a URL – viewable on desktops, smartphones and even WAP browsers.


Snacktory – Yet another Readability clone. This time in Java.

For Jetslide I needed a Readability clone in Java. There are already some tools, but I wanted some more and other features, so I adapted the existing goose and jreadability and added some stuff. Check out the detection quality at Jetslide and fork it to improve it – since today snacktory is free software 🙂 !

Copied from the README:

Snacktory
This is a small helper utility for people who don’t want to write yet another Java clone of Readability. In most cases, this is applied to articles, although it should work for any website to find its major area and extract its text and its important picture. Have a look at Jetslide, where Snacktory is used. Jetslide is a new way to consume news; it does not only display the website’s title but also a small preview of the site (‘a snack’) and the important image if available.
License
The software stands under the Apache 2 License and comes with NO WARRANTY
Features
Snacktory borrows some ideas from jReadability and goose (ideas + a lot test cases)
The advantages over jReadability are
  • better article text detection than jReadability
  • only Java deps
  • more tests
The advantages over Goose are
  • similar article text detection, although better detection for non-English sites (German, Japanese, …)
  • snacktory does not depend on the word count in its text detection to support CJK languages
  • no external services required to run the core tests => faster tests
  • better charset detection
  • with caching support
  • skipping some known filetypes
The disadvantages to Goose are
  • only the detection of the top image and the top text is supported at the moment
  • some tests which passed in Goose do not pass here. But I added a bunch of other useful sites (stackoverflow, facebook, other languages …)
Usage
HtmlFetcher fetcher = new HtmlFetcher();
// set a cache, e.g. take the map implementation from google collections:
// fetcher.setCache(new MapMaker().concurrencyLevel(20).
//                  maximumSize(count).expireAfterWrite(minutes, TimeUnit.MINUTES).makeMap());
JResult res = fetcher.fetchAndExtract(url, resolveTimeout, true);
res.getText(); res.getTitle(); res.getImageUrl();

How to backup ElasticSearch with rsync

Although there is a gateway feature implemented in ElasticSearch, which basically recovers your index on start if it is corrupted or similar, it is wise to create backups in case there are bugs in Lucene or ElasticSearch (assuming you have set the fs gateway). The backup script looks as follows and uses the possibility to disable and re-enable flushing for a short time:

# TO_FOLDER=/something
# FROM=/your-es-installation

DATE=`date +%Y-%m-%d_%H-%M`
TO=$TO_FOLDER/$DATE/
echo "rsync from $FROM to $TO"
# the first time rsync can take a bit long - do not disable flushing
rsync -a $FROM $TO

# now disable flushing and do one manual flushing
$SCRIPTS/es-flush-disable.sh true
$SCRIPTS/es-flush.sh
# ... and sync again
rsync -a $FROM $TO

$SCRIPTS/es-flush-disable.sh false

# now remove too old backups
rm -rf `find $TO_FOLDER -maxdepth 1 -mtime +7` &> /dev/null
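
The two helper scripts referenced above could be as simple as this (a sketch; the setting was called index.translog.disable_flush in the ElasticSearch versions of that time):

```
# es-flush-disable.sh - first argument is true or false
curl -XPUT 'http://localhost:9200/_settings' -d "
index :
    translog.disable_flush : $1"

# es-flush.sh - force one manual flush
curl -XPOST 'http://localhost:9200/_flush'
```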

E.g. you could call the backup script regularly (even hourly) from cron and it will create new backups. By the way – if you want to take a look at the settings of all indices (e.g. to check the flush-disabling stuff) this might be handy:

curl -XGET 'localhost:9200/_settings?pretty=true'

Here are the complete scripts as a gist, which I’m using for my Jetslide project.

ElasticSearch vs. Solr #lucene


I prepared a small presentation on ‘Why one should use ElasticSearch over Solr’.**

There is also a German article available in the iX magazine, which introduces you to ElasticSearch and compares Apache Solr and ElasticSearch in several aspects.

** This slide is based on my personal opinion and experience with my twitter search Jetwick and my news reader Jetslide. It should not be used to show that Solr or ElasticSearch is ‘bad’.