Longest Common Substring Algorithm in Java

For jetwick I needed yet another string algorithm and stumbled over this cool and common problem: finding the longest common substring of two strings. Be sure that you understand the difference to the longest common subsequence problem.

For example if we have two strings:

Please, peter go swimming!

and

I’m peter goliswi

The algorithm should print out ‘ peter go’. The longest common substring algorithm can be implemented in an efficient manner with the help of suffix trees.

But in this post I’ll try to explain the slightly less efficient ‘dynamic programming’ version of the algorithm. Dynamic programming means that you reuse already calculated information in a later step, or that you break the algorithm into parts so information can be reused. To understand the algorithm you just need to fill the entries of an integer array with the lengths of the identical substrings. Assume we use i for the horizontal string (please …) and j for the vertical string. At some point the algorithm hits i=19 and j=0, one identical character ‘i’. Then the line

num[i][j] = 1;

is executed and records an identical substring of length 1.

  please, peter go swimming
i 0000000000000000000100100
' 0000000000000000000000000
m 0000000000000000000011000
  0000000100000100100000000
p 1000000020000000000000000
e 0010010003000000000000000
t 0000000000400000000000000
e 0010010001050000000000000
r 0000000000006000000000000
  0000000100000700100000000
g 0000000000000080000000000
o 0000000000000009000000000
l 0100000000000000000000000
i 0000000000000000000100100
s 0001000000000000010000000
w 0000000000000000002000000
i 0000000000000000000300100

Later on it hits the m characters and saves 1 two times to the array, but then at i=7 and j=3 it starts our substring and saves 1 for the space character. Some loop iterations later it reaches i=8 and j=4. Now it reuses the already calculated “identical length” of 1 and does:

num[8][4] = 1 + num[7][3];

and we get 2. So, we now know we have a common substring of 2 characters. And with

if (num[i][j] > maxlen)

we make sure that we overwrite the existing longest substring (stored in the StringBuilder) ONLY IF a longer substring is found. We then either append the character (if it continues the current substring):

sb.append(str1.charAt(i));

or we start a new, longer substring. See the Java code (mainly from Wikipedia) for yourself:

public static String longestSubstring(String str1, String str2) {
    if (str1 == null || str1.isEmpty() || str2 == null || str2.isEmpty())
        return "";

    // ignore case
    str1 = str1.toLowerCase();
    str2 = str2.toLowerCase();

    // java initializes the array already with 0
    int[][] num = new int[str1.length()][str2.length()];
    int maxlen = 0;
    int lastSubsBegin = 0;
    StringBuilder sb = new StringBuilder();

    for (int i = 0; i < str1.length(); i++) {
        for (int j = 0; j < str2.length(); j++) {
            if (str1.charAt(i) == str2.charAt(j)) {
                if ((i == 0) || (j == 0))
                    num[i][j] = 1;
                else
                    num[i][j] = 1 + num[i - 1][j - 1];

                if (num[i][j] > maxlen) {
                    maxlen = num[i][j];
                    // generate substring from str1 => i
                    int thisSubsBegin = i - num[i][j] + 1;
                    if (lastSubsBegin == thisSubsBegin) {
                        // the current LCS is the same as the last time this block ran
                        sb.append(str1.charAt(i));
                    } else {
                        // this block resets the string builder if a different LCS is found
                        lastSubsBegin = thisSubsBegin;
                        sb = new StringBuilder();
                        sb.append(str1.substring(lastSubsBegin, i + 1));
                    }
                }
            }
        }
    }

    return sb.toString();
}
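
A quick way to try it: call the method with the two example strings from above – it should print ‘ peter go’ (including the leading space):

public static void main(String[] args) {
    System.out.println(longestSubstring("Please, peter go swimming!", "I'm peter goliswi"));
}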

Viewing hprof from android with jvisualvm

  1. Add an additional permission to your app
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    to your manifest
  2. Create hprof
    protected void onDestroy() {
        super.onDestroy();
        try {
            Debug.dumpHprofData("/sdcard/data.hprof");
        } catch (Exception e) {
            Log.e("xy", "couldn't dump hprof");
        }
    }
    or alternatively create a hprof file with: adb shell ps | grep yourpackage; adb shell kill -10 pid
  3. Get the hprof file
    android-sdk-linux_x86/platform-tools/adb pull /sdcard/data.hprof /tmp/
  4. Convert the hprof to sun standard format
    android-sdk-linux_x86/tools/hprof-conv /tmp/data.hprof /tmp/data.std.hprof
  5. Open hprof with /usr/lib/jvm/java-6-sun/bin/jvisualvm
    File -> Load -> Heap dumps (hprof)

 

Avoid memory leaks -> take a look at the trackbacks!

3D Rotation in Gimp

  1. Create an additional transparent layer
  2. Now select the layer that should be rotated in 3D
  3. Go to Filters -> Map -> Map Object
  4. Choose ‘map to box’
  5. Click ‘transparent background’
  6. Go to the ‘Box’ tab. One side gets the layer from step 1, all others get the layer from step 2.
  7. Go to Orientation -> Rotation and adjust as desired

 

Twitter API and Me

I have a love-hate relationship with Twitter. As a user I see the benefits of Twitter when looking at it without the spam, duplicates and senseless tweets, e.g. through jetwick. But as a developer I find the Twitter API very ‘heuristic’ and hand-waving in a lot of areas, which makes it complicated to use. I would have been lost without the nice twitter4j project, so thanks to the author!

Now let me give you some examples of

Strange things of the Twitter API

  • The since_id attribute is not supported when paginating in the search API:
    “The since_id parameter will be removed from the next_page element as it is not supported for pagination. If since_id is removed a warning will be added to alert you.”
    So you need to implement your own pagination if you do not want to fetch already visited tweets via the search API – see the sketch after this list.
  • The search API matches terms inside URLs. In nearly all cases this is not useful, especially for terms like ‘twitter’ or ‘google’, where the search API returns confusing tweets that merely contain URLs such as search.twitter.com or google.com. But marketing companies need to search URLs and the tweet button also relies on that ‘feature’ – so why not disable it by default and enable it via ‘link:http://any-link.here’? It would also be more useful to match against the title of the website, like jetwick does, but that’s another topic.
  • The search API does NOT return complete results compared to the streaming API. I.e. results from the streaming API contain all tweets with the specified keywords (apart from tweets matched via the URL bug I mentioned in the previous point), whereas the search API can leave out ‘spam’ tweets. I’m unsure whether those tweets really have to be low quality or whatever. I guess this is more a technical issue, where the search API leaves out some tweets the streaming API has.
  • The REST API only lets you fetch roughly the last 3200 tweets of one user and 800 tweets from your friends (i.e. your home timeline).
  • A huge number of different API limits:
    • 350 requests per hour and user for the REST API
    • Searches are restricted per IP (an unknown number, much higher than the 350 requests per hour)
    • Only 2 filter streams are allowed – this is restricted per IP, and only 200 keywords are possible per stream! Filter streams also deliver only approx. 50 tweets/s, even if only a few (high-frequency) keywords are used.
    • The search API allows searches into history, but how far back depends on the frequency of the term. I know this is logical for every real-time inverted index of this size, but it should be better documented.
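
Here is the pagination sketch mentioned above. It tracks the highest tweet id on the client side; the method names assume the twitter4j 2.x API (Query.setSinceId, setPage, QueryResult.getTweets), so treat them as an assumption and adjust for your twitter4j version:

import java.util.List;
import twitter4j.*;

public class OwnPagination {
    public static void main(String[] args) throws TwitterException {
        Twitter twitter = new TwitterFactory().getInstance();
        long lastSeenId = 0; // highest tweet id processed in an earlier run
        long maxId = lastSeenId;
        Query query = new Query("jetwick");
        query.setSinceId(lastSeenId);
        for (int page = 1; page <= 10; page++) {
            query.setPage(page);
            QueryResult result = twitter.search(query);
            List<Tweet> tweets = result.getTweets();
            if (tweets.isEmpty())
                break;
            for (Tweet tweet : tweets) {
                // since_id is dropped from next_page, so skip already visited tweets ourselves
                if (tweet.getId() <= lastSeenId)
                    continue;
                maxId = Math.max(maxId, tweet.getId());
                System.out.println(tweet.getFromUser() + ": " + tweet.getText());
            }
        }
        // remember maxId for the next run so already visited tweets can be skipped again
    }
}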

Regarding API Terms

Of course Twitter has API terms. This is necessary and nice to protect users from spam sites etc.

But there is also a display style guideline, with which I had some ‘fun’ last weekend: I was asked e.g. to make the hashtag links of jetwick conform to the display guideline. This is annoying. Now I need to pop up a dialog instead of directly triggering a search on jetwick – hey, it is a search engine! But Twitter has to make money. That is ok. But I would like to have an exception for free or open source projects. No chance 😦 … here is my email conversation regarding the minor API term violation:

Dear XY,
ok, I won't provide an API to others. Thanks for the clarification.

I've got a further question. Are the display guidelines a requirement to
be aligned with the API terms of use and to continue running Jetwick? (I
shutted it down to not being evil)

In the terms I can read as the first principle: "Don't surprise users"
which is very important for me and it would disturb the user experience
if a hashtag click (or a click on '@user') in a tweet would result in a
pop up to twitter search or something and not simply trigger a search on
jetwick.

Please do not understand me wrong, I have already several links back to
twitter: the date links to the tweet on twitter, the retweet and reply
links to twitter and finally the user links back to twitter. Jetwick is
a complete read only service (see my API access), so I would be stupid
if I hadn't links back to twitter, which actually allows my users to
share noisefree information via twitter.

Finally: If the layout guides are a requirement, would you make an
exception for Jetwick regarding the hashtag and @user links within a
tweet? Many companies make exceptions when it comes to open source
projects such as Jetbrains (IDEA), Yourkit (Profiler), Attlassian
(Confluence), ... what about Twitter?

Kind Regards,
Peter.

The answer from Twitter is crystal clear: Twitter does not provide API term exceptions to open source projects like other companies do. It also indicates that the API folks have a bit too much to do, as the support does not really answer my question and understands neither what github is nor what jetwick means:

Hey Peter,

Thanks for following up. The API Terms of Service, as an overriding
document, do require you to adhere to these display guidelines -- in the
same "Don't Surprise Users" section you referenced. I recommend
adding links of your own, such as "#github on Jetwick" that surface
these results. Again, I'm sorry for the inconvenience this has caused,
and let me know if you have any other questions.

Regards,
XY

A second important thing

you’ll otherwise miss is that you are not allowed to offer an API to other people, even if your project is open source! Here is the email:
“Returning Twitter data, like tweets, through an API of your own is not allowed, neither for commercial services nor independent or open-source services. We are not looking for partners to formally extend new APIs as you request.”

Conclusion

So, keep all this in mind when you start to build a system using or even relying on the Twitter API. I hope this post demystifies the Twitter API a bit! If you have encountered similar issues: feel free to comment 🙂 !

Twitter Search Tools and more. #Archive #FriendSearch #Trends

There is an overwhelming number of tools for Twitter: URL shorteners like bit.ly, web clients like hootsuite.com, but today I would like to show you Twitter tools which are good for searching tweets, let you archive them, display trends and more. I picked tools which could be useful when you are looking only for relevant information without noise. Let me know your favourites for getting news out of Twitter!

For the Twitter search tools I always made a quick test to get a feeling for how much noise these tools can filter away, and only two tools out of a dozen – see below – made it easy to find the following news one day later:

The tools are

  1. Jetwick – Free to use (Caution: I’m the developer)
  2. Research.ly – Freemium (Caution: it required a bit of selecting the appropriate days to get the news)

Twazzup and What The Trend also showed the news I was looking for, but the news wasn’t in their displayed tweets – they also display blogs and Google News in a separate widget :). To my surprise the security news didn’t pop up in SocialMention and Bing Social Search, but other news important to Java developers did. So, give it a shot. The problem for the other search engines was that they mostly cover the real-time tweets only, do not provide a lot of useful filters, and some of the search tools simply were not designed for this task.

I’ve split the tools into the following subgroups:

  1. Searching & Archiving
  2. Searching
    • The Giants under the Twitter Searches
  3. Archiving
  4. Cool Tools

1 Searching & Archiving

The Archivist

  • Archive a search; Trending URLs; Top Users
  • Alexa rank: 13k
  • Login required to archive, Free to use

Tweet Nest

Jetwick

  • Open Twitter Search without Noise; Sort by retweets; Lets you show only relevant tweets since the last login; Archiving and Searching of any user’s tweets; Search friends only; Filters for duplicate reduction, language or a distinct day
    More Features …
  • Open Source which makes it suitable to do your own research on twitter
  • Alexa rank: 400k
  • Login optional, Free to use

2 Searching

Topsy

IceRocket

  • Searches Blogs, Twitter, MySpace, News, Images, …
  • Alexa rank: 5k
  • Without Login, Free to use

SocialMention

  • Searches Blogs, Twitter, … Every Search shows Top Keywords and Users
  • Alexa rank: 7k
  • Without Login, Free to use

WeFollow

Twellow

  • Twitter directory (‘Twitter yellow pages’)
  • Alexa rank: 9k
  • Login optional, Free to use

Trendistic / HashTags

What The Trend

StateOfSearch

PubSub

Twazzup

Research.ly

  • Search ‘historic’ tweets; Search local business; Map the relationships between you and other users
  • Created from the
  • Alexa rank: 88k
  • Login required, Freemium

Twips

TweetScan

Searchtastic

TwitterLocal

  • Search local business
  • Alexa rank: 331k
  • Does not seem to work at the moment (as of Feb 2011)

SnapBird

TwimeMachine

Tweetzi

TweeFind

  • Twitter search which shows related searches
  • Alexa rank: 1600k
  • Login optional, Free to use

Twippr

  • Search within your friends’ tweets
    More Features?
  • Alexa rank: 1800k
  • Login required, Free to use

Sparrw

The Giants under the Twitter Searches

3 Archiving

BackTweets

Twapper Keeper

TweetBackup

Tweetake

BackupMyTweets

4 Cool Tools

FavStar

  • Search Twitter Users; Up vote users (not tweets)
  • Alexa rank: 6k
  • Login optional, Freemium

Tweepi

TweetStats

  • Trends; Stats for your account
  • Alexa rank: 21k
  • Without login, Free to use

Twitaholic ( TwitterCounter )

  • Most popular users and twitter stats
  • Alexa rank: 23k
  • Login optional, Free to use

What the HashTag?

  • User-editable encyclopedia for hashtags found on Twitter – this way you can find the meaning of a hashtag – Similar to find origin of jetwick
  • Alexa rank: 45k
  • Login optional, Free to use

TweetBeep

Twitturly

Mixero

The Cadmus

TweetMeme

Twitter Power Search

  • Multi Widget View; Determining Trends; filter Audio/Video
  • Alexa rank: 1110k
  • Without login, Free to use

Java Tweets of the last Week, 14th February #Lucene #ElasticSearch #Solr

Now as jetwick stabilizes again, here are some Java tweets. They are easy to collect, also for other topics like Lucene (see below), NetBeans etc.:

  1. Search for the term you are interested in. If you want to make sure jetwick collects all results from one week: login and save your search.
  2. Then select the first (2nd, 3rd, …) day of the week or page through the results, which are sorted by retweets
  3. Click on Export as ‘html’ under the results to copy & paste some of the tweets into your document.
    Then click the browser’s back button and proceed with the next day or page. Ask me if you have problems…

Here are the collected tweets – did I miss some important news?

Java & Search: Lucene / Solr / ElasticSearch

Why Jetwick moved from Solr to ElasticSearch

I like both technologies, Solr and ElasticSearch, and a lot of work is going into both. So, let me explain why I chose to migrate from Solr to ElasticSearch (ES).

What is elastic?

  • ES lets you add and remove nodes [Video] and the requests will be handled by the correct node. Nodes even do ‘zero config’ discovery.
    To scale as the load increases you can use replicas. ElasticSearch will automatically play the load balancer and choose the appropriate node.
  • ES lets you scale as the amount of data increases, because you can easily use sharding: it’s just a number in ES (either via API or via configuration).

With these features ES is well prepared for the century of the cloud [Blog]!

What’s the difference to Solr?

Solr wasn’t designed from the ground up with the ‘cloud’ in mind, but of course you can do sharding, use replication and use multiple cores with Solr. It’s just a bit more complicated.

When using Solr Cloud and ZooKeeper this gets better. You’ll also need to invest some time to make Solr near real time to be comparable with ES. This all seemed a bit too tricky to me (in Dec 2010) and I don’t have any time for administration work in my free time, e.g. to set up backups/replicas, add shards/indices, …

Other Options?

What are my other options? There is Xapian, Sphinx etc. But only the following two approaches fulfilled my requirements:

  1. Using Solandra or
  2. Moving from Solr to ElasticSearch

I wanted a Lucene-based solution where sharding and creating indices work out of the box. I simply wanted more data from Twitter available in Jetwick.

The first option is very nice: no changes to your code are required – only a minor change in your solrconfig.xml and you get a distributed and real-time Solr! So, I tried Solandra and after a lot of support from Jake (thanks!) I got it running with Jetwick! But in the end I still had performance issues with my indexing strategy, so I tried – in parallel – the second option.

What are the advantages of ElasticSearch?

To be honest, Jetwick doesn’t really need to be elastic – I’m only using the sharding feature at the moment, as I don’t own capacity on a cloud. BUT ElasticSearch is also elastic in a different area: ES lets you manage indices very easily! A clever thing in ES is that you don’t define the document structure per index like you do in Solr – instead you define types and then create documents of a specific type in a specific index. And documents in ES don’t need to be flat – they can be nested, as they are pure JSON.
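
As a small illustration – the index name, type and field names below are made up, and client is an ElasticSearch Client as created in the getting-started article further down – a nested document can be built and indexed via the Java API like this:

import static org.elasticsearch.common.xcontent.XContentFactory.*;
...
XContentBuilder b = jsonBuilder().startObject();
b.field("tweetText", "wicket and elasticsearch");
// a nested object – no flat schema required as in Solr
b.startObject("user");
b.field("name", "peter");
b.field("followers", 42);
b.endObject();
b.endObject();
client.prepareIndex("twindex", "tweet", "1").setSource(b).execute().actionGet();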

That and the ‘elasticity’ could make ES suitable as a hip NoSql storage 😉

Another advantage over Solr is the near real time behaviour, which you get at no cost when switching to ES.

The Move!

Moving Jetwick to ElasticSearch wasn’t as easy as I had hoped, although I’m sure one could do a normal migration in one day with my experience now ;). It took a lot of time to understand the new technology and, more importantly, to migrate my UI code, which relied too heavily on constructing SolrQuery objects. In the end I created a custom Solr2ElasticHelper utility to avoid this clumsy work at the beginning, planning to fully migrate that code some day. By now it is migrated to my own query object, which makes it easy for me to add and remove filters etc.

When moving to ElasticSearch, make sure it supports all the features you need from Solr. Although Shay works really hard to integrate new features into ES, he cannot do all the work alone! E.g. I had to port Solr’s WordDelimiterFilter, but this wasn’t that difficult – just copy & paste plus some configuration.

ES uses Netty under the hood – no other web server is necessary. Just start the node either via API or directly via bin/elasticsearch and then query the node via curl or the browser. For example you can use the nice ElasticSearch Head project:

or ElasticSearch-JS, which are equivalents of the Solr admin page. To add a node, simply start another ES instance and they will automagically discover each other. You can also use curl on the command line to query and feed the index, as documented in the REST API documentation.

No technology is perfect, so keep in mind the following disadvantages, which will disappear over time in my opinion:

  • Solr has more Analyzers, Filters, etc., but it is relatively easy to use them in ES as well.
  • Solr has a larger community, a larger user base and more companies offering professional support
  • Solr has better documentation and more books. Regarding the docs of ES: they are now moving to the github wiki, so the docs will improve IMO.
  • Solr has more tooling e.g. solrmonitor, LucidGaze and newrelic, but you still have yourkit and jvisualvm 😉

But also keep in mind the following points:

  • Shay fixes bugs very quickly!
  • ElasticSearch has a more recent Lucene version and releases more frequently
  • It is very easy to contribute via github (just a pull request away ;))

To get an introduction to ElasticSearch you can read this article.

Get Started with ElasticSearch and Wicket


This article will show you the most basic steps required to get ElasticSearch working for the simplest scenario with the help of the Java API – it covers installation, indexing and querying.

1. Installation

Either get the sources from github and compile them, or grab the zip file of the latest release and start a node in the foreground via:

bin/elasticsearch -f

To make things easy for you I have prepared a small example with sources derived from jetwick, where you can start ElasticSearch directly from your IDE – e.g. just click ‘open projects’ in NetBeans and then start from the ElasticNode class. The example should show you how to do indexing via the bulk API, querying, faceting, filtering, sorting and probably some more:

To get started on your own, see the sources of the example where I’m actually using ElasticSearch, or take a look at the shortest ES example (with Java API) in the last section of this post.

Info: If you want ES to start automatically when your Debian system boots, then read this documentation.

2. Indexing and Querying

First of all you should define all fields of your document which shouldn’t get the default analyzer (e.g. strings get analyzed, etc.) and specify that in the tweet.json under the folder es/config/mappings/_default.

For example in the elasticsearch example the userName shouldn’t be analyzed:

{ "tweet" : {
   "properties" : {
     "userName": { "type" : "string", "index" : "not_analyzed" }
}}}

Then start the node:

import static org.elasticsearch.node.NodeBuilder.*;
...
Builder settings = ImmutableSettings.settingsBuilder();
// here you can set the node and index settings via API
settings.build();
NodeBuilder nBuilder = nodeBuilder().settings(settings);
if (testing)
 nBuilder.local(true);

// start it!
node = nBuilder.build().start();

You can get the client directly from the node:

Client client = node.client();

or if you need the client in another JVM you can use the TransportClient:

Settings s = ImmutableSettings.settingsBuilder().put("cluster.name", cluster).build();
TransportClient tmp = new TransportClient(s);
tmp.addTransportAddress(new InetSocketTransportAddress("127.0.0.1", 9300)); // 9300 is the transport port, 9200 the HTTP port
client = tmp;

Now create your index:

try {
  client.admin().indices().create(new CreateIndexRequest(indexName)).actionGet();
} catch(Exception ex) {
   logger.warn("already exists", ex);
}

When indexing your documents you’ll need to know where to store (indexName) and what to store (indexType and id):

IndexRequestBuilder irb = client.prepareIndex(getIndexName(), getIndexType(), id).
setSource(b);
irb.execute().actionGet();

where the source b is the jsonBuilder created from your domain object:

import static org.elasticsearch.common.xcontent.XContentFactory.*;
...
XContentBuilder b = jsonBuilder().startObject();
b.field("tweetText", u.getText());
b.field("fromUserId", u.getFromUserId());
if (u.getCreatedAt() != null) // the 'if' is not necessary in >= 0.15
  b.field("createdAt", u.getCreatedAt());
b.field("userName", u.getUserName());
b.endObject();

To get a document via its id you do:

GetResponse rsp = client.prepareGet(getIndexName(), getIndexType(), "" + id).
execute().actionGet();
MyTweet tweet = readDoc(rsp.getSource(), rsp.getId());

Getting multiple documents at once is currently not supported via ‘prepareGet’, but you can create a terms query on the built-in ‘_id’ field to achieve this bulk retrieval. For indexing or updating a lot of documents there is already a bulk API.
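
A rough sketch of how such a bulk request could look with the Java API – the index name, type and jsonBuilder come from the surrounding examples, and the MyTweet getters (getText, getUserName, getId) are assumed to match the setters used in readDoc:

// index several documents in one round trip via the bulk API
BulkRequestBuilder bulkBuilder = client.prepareBulk();
for (MyTweet tweet : tweets) {
  XContentBuilder source = jsonBuilder().startObject().
     field("tweetText", tweet.getText()).
     field("userName", tweet.getUserName()).
     endObject();
  bulkBuilder.add(client.prepareIndex(getIndexName(), getIndexType(), "" + tweet.getId()).
     setSource(source));
}
BulkResponse bulkRsp = bulkBuilder.execute().actionGet();
if (bulkRsp.hasFailures())
  logger.warn("bulk indexing had failures");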

In test cases, after indexing you’ll have to make sure that the documents are actually ‘committed’ (refreshed) before searching (don’t do this in production):

RefreshResponse rsp = client.admin().indices().refresh(new RefreshRequest(indices)).actionGet();

To write tests which use ES you can take a look into the source code to see how I’m doing this (starting ES in beforeClass etc.).
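
A minimal sketch of such a test base class, assuming JUnit 4 and the ‘none’ gateway mentioned in the hints below (class and method names are made up):

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import static org.elasticsearch.node.NodeBuilder.*;

public abstract class AbstractElasticTest {
    protected static Node node;
    protected static Client client;

    @BeforeClass
    public static void startNode() {
        // local node with the 'none' gateway => nothing is persisted between test runs
        node = nodeBuilder().local(true).settings(ImmutableSettings.settingsBuilder().
            put("gateway.type", "none").build()).build().start();
        client = node.client();
    }

    @AfterClass
    public static void stopNode() {
        node.close();
    }
}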

Now let us search:

SearchRequestBuilder builder = client.prepareSearch(getIndexName());
XContentQueryBuilder qb = QueryBuilders.queryString(queryString).defaultOperator(Operator.AND).
   field("tweetText").field("userName", 0).
   allowLeadingWildcard(false).useDisMax(true);
builder.addSort("createdAt", SortOrder.DESC);
builder.setFrom(page * hitsPerPage).setSize(hitsPerPage);
builder.setQuery(qb);

SearchResponse rsp = builder.execute().actionGet();
SearchHit[] docs = rsp.getHits().getHits();
for (SearchHit sd : docs) {
  //to get explanation you'll need to enable this when querying:
  //System.out.println(sd.getExplanation().toString());

  // if we use in mapping: "_source" : {"enabled" : false}
  // we need to include all necessary fields in query and then to use doc.getFields()
  // instead of doc.getSource()
  MyTweet tw = readDoc(sd.getSource(), sd.getId());
  tweets.add(tw);
}

The helper method readDoc is simple:

public MyTweet readDoc(Map source, String idAsStr) {
  String name = (String) source.get("userName");
  long id = -1;
  try {
     id = Long.parseLong(idAsStr);
  } catch (Exception ex) {
     logger.error("Couldn't parse id:" + idAsStr);
  }

  MyTweet tweet = new MyTweet(id, name);
  tweet.setText((String) source.get("tweetText"));
  tweet.setCreatedAt(Helper.toDateNoNPE((String) source.get("createdAt")));
  tweet.setFromUserId((Integer) source.get("fromUserId"));
  return tweet;
}

If you want the facets to be returned along with the search results, you’ll have to ‘enable’ them when querying:

facetName = "userName";
facetField = "userName";
builder.addFacet(FacetBuilders.termsFacet(facetName)
   .field(facetField));

Then you can retrieve all terms facets via:

SearchResponse rsp = ...
if (rsp != null) {
 Facets facets = rsp.facets();
 if (facets != null)
   for (Facet facet : facets.facets()) {
     if (facet instanceof TermsFacet) {
         TermsFacet ff = (TermsFacet) facet;
         // => ff.getEntries() => count per unique value
...

This is done in the FacetPanel.

I hope you now have a basic understanding of ElasticSearch. Please let me know if you found a bug in the example or if something is not clearly explained!

In my (too?) small Solr vs. ElasticSearch comparison I also listed some useful tools for ES. Have a look at that as well!

3. Some hints

  • Use the ‘none’ gateway for tests. The gateway is used for long term persistence.
  • The Java API is not well documented at the moment, but there are now several Java API usages in the Jetwick code
  • Use scripting for boosting and use JavaScript as the language – the most performant as of Dec 2010!
  • Restart the node to try a new scripting language
  • For the snowball stemmer in 0.15 use language:English (otherwise you get a ClassNotFoundException)
  • See how your terms get analyzed:
    http://localhost:9200/twindexreal/_analyze?analyzer=index_analyzer “this is a #java test => #java + test”
  • Or include the analyzer as a plugin: put the jar under lib/ E.g. see the icu plugin. Be sure you are using the right guice annotation
  • Port 9200 (up to 9300) is used for HTTP communication and port 9300 (up to 9400) for the transport client.
  • If you have problems with ports: make sure at least a simple put + get is working via curl
  • Scaling-ElasticSearch
    This solution is my preferred solution for handling long term persistency of a cluster since it means
    that node storage is completely temporal. This in turn means that you can store the index in memory for example,
    and get the performance benefits that come with it, without sacrificing long term persistency.
  • Too many open files: edit /etc/security/limits.conf
    user soft nofile 15000
    user hard nofile 15000
    ! then login + logout !

4. Simplest Java Example

import static org.elasticsearch.node.NodeBuilder.*;
import static org.elasticsearch.common.xcontent.XContentFactory.*;
...
Node node = nodeBuilder().local(true).
        settings(ImmutableSettings.settingsBuilder().
        put("index.number_of_shards", 4).
        put("index.number_of_replicas", 1).
        build()).build().start();

String indexName = "tweetindex";
String indexType = "tweet";
String fileAsString = "{"
        + "\"tweet\" : {"
        + "    \"properties\" : {"
        + "         \"longval\" : { \"type\" : \"long\", \"null_value\" : -1}"
        + "}}}";

Client client = node.client();
// create the index with the mapping defined above
client.admin().indices().
        create(new CreateIndexRequest(indexName).mapping(indexType, fileAsString)).
        actionGet();

// wait until the shards are allocated
client.admin().cluster().health(new ClusterHealthRequest(indexName).waitForYellowStatus()).actionGet();

XContentBuilder docBuilder = XContentFactory.jsonBuilder().startObject();
docBuilder.field("longval", 124L);
docBuilder.endObject();

// feed the previously created doc
IndexRequestBuilder irb = client.prepareIndex(indexName, indexType, "1").
        setConsistencyLevel(WriteConsistencyLevel.DEFAULT).
        setSource(docBuilder);
irb.execute().actionGet();

// there is also a bulk API if you have many documents
// make the doc available for sure – you shouldn't need this in production, because
// the documents become available automatically in (near) real time
client.admin().indices().refresh(new RefreshRequest(indexName)).actionGet();

// create a query to get this document
XContentQueryBuilder qb = QueryBuilders.matchAllQuery();
TermFilterBuilder fb = FilterBuilders.termFilter("longval", 124L);
SearchRequestBuilder srb = client.prepareSearch(indexName).
        setQuery(QueryBuilders.filteredQuery(qb, fb));

SearchResponse response = srb.execute().actionGet();

System.out.println("failed shards:" + response.getFailedShards());
Object num = response.getHits().hits()[0].getSource().get("longval");
System.out.println("longval:" + num);