Friday, December 16, 2016

Java HashMap - Infinite loop; another reason not to use it in a non-thread-safe way!

We are aware that HashMap in Java is not thread safe. Concurrent access can corrupt its internal data structures, resulting in lost keys or incorrect values for keys.

I realized it's not just limited to that; the following is an issue we faced in one of our legacy applications, where non-thread-safe HashMap usage resulted in an infinite loop.

I had been looking at the CPU usage of the application, running on a 16-core box. CPU usage was 400%: 4 of the 16 cores were busy, with absolutely zero external load on the application. The top output is shown below.

The following is the thread stack of the busy threads.

All of them are stuck in HashMap; the put call is in an infinite loop.

JDK src code for HashMap

Reviewing line 494 of the JDK source (the same line shown in the thread stack), along with the excellent blog, makes the cause of the infinite loop clear.
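For reference, a minimal sketch of the thread-safe alternative we would reach for: swapping in ConcurrentHashMap (the class and method names below are illustrative, not the legacy application's code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeMapDemo {
    // Fill the map from several threads using disjoint key ranges.
    // With a plain HashMap this pattern can corrupt the bucket chains
    // (lost entries, or the resize-time infinite loop described above);
    // ConcurrentHashMap is safe under concurrent puts by design.
    static Map<Integer, Integer> concurrentFill(int threadCount, int perThread)
            throws InterruptedException {
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread[] threads = new Thread[threadCount];
        for (int t = 0; t < threadCount; t++) {
            final int offset = t * perThread;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    map.put(offset + i, i); // disjoint keys per thread
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        return map;
    }

    public static void main(String[] args) throws InterruptedException {
        // Every key survives; no corruption, no spinning threads.
        System.out.println(concurrentFill(4, 10_000).size()); // prints 40000
    }
}
```

Collections.synchronizedMap is the other quick option, at the cost of coarse-grained locking.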

Tuesday, September 01, 2015

A case for Bloom Filter

We provide cache as a service to our users. Users can put elements into the cache with a TTL. We needed to measure the performance of the cache in terms of hit rate.

The hit rate can be low for two reasons: LRU eviction in the cache and the TTL of the element put by the user. We needed to ensure the hit rate was not low because of LRU; in that case we were ready to allocate more memory to the cache.

One of the ways to determine whether TTL caused removal of an element is to track all the keys ever put in the cache, even after the elements are removed.

A Bloom filter is a good data structure to track the keys. We keep adding keys to the Bloom filter. When a cache miss occurs, we check the Bloom filter to see whether the key was ever present. If it was, the cache miss was caused by TTL. If it was not, it was a real cache miss.

A Bloom filter gives a definitive answer when an element was never added (it has no false negatives); when it reports that an element is present, the answer is only probabilistic. In our case this is acceptable: a false positive merely counts an extra cache miss as TTL-caused, and it is easier to tune the TTL than the cache.

Also, we only add elements for tracking and never remove them, which matches the constraints of a standard Bloom filter (it does not support deletion).
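A minimal Bloom filter sketch in Java (illustrative names, not our production implementation) that supports exactly this add-only tracking:

```java
import java.util.BitSet;

public class KeyTracker {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public KeyTracker(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive k bit positions from two base hashes (double-hashing scheme).
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (h1 >>> 16) | (h1 << 16);
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashes; i++) bits.set(index(key, i));
    }

    // false => definitely never added; true => probably added
    // (false positives possible, false negatives impossible).
    public boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(index(key, i))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        KeyTracker tracker = new KeyTracker(1 << 16, 4);
        tracker.add("user:42");
        System.out.println(tracker.mightContain("user:42")); // prints true
    }
}
```

On a cache miss, a false from mightContain means a real miss; a true means the key was (most likely) evicted by TTL.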

In summary, the Bloom filter worked well for our case compared to alternative set data structures. We were able to determine the presence of an element with the minimal possible number of bits and tune our cache performance accordingly.

Sunday, August 09, 2015

Port Unification with Netty in Proxy Mode

Serving both HTTP and HTTPS requests on the same port was a requirement in my scenario; this is called port unification. I was using Netty to serve requests. One of the standard ways to handle the scenario is the approach described in PortUnificationHandler.

However, in my case, the Netty implementation was mimicking the behaviour of a proxy server. This means a client makes a CONNECT request before any actual data request is sent, and the CONNECT request is always plain text, even if the protocol that follows is HTTPS.

The approach to use in this scenario is as follows:

Implement a ChannelInboundByteHandlerAdapter; on inbound buffer update, check whether the request is a CONNECT request. If it is, pass the incoming bytes on to the next handler in the chain.

If it's not a CONNECT request, check whether the data is SSL-encrypted and add the SSL handler to the chain if necessary. Before passing the bytes on to the next handler, the unification handler should remove itself from the chain.
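The byte-level detection itself does not depend on Netty's handler API; a sketch of that check (class and method names are illustrative; the 0x16 first byte is the TLS handshake record content type):

```java
import java.nio.charset.StandardCharsets;

public class ProtocolSniffer {
    // The first byte of a TLS handshake record is 0x16 (handshake type),
    // so an SSL/TLS client hello is distinguishable from plain-text HTTP.
    static boolean looksLikeTls(byte[] head) {
        return head.length > 0 && (head[0] & 0xFF) == 0x16;
    }

    // A proxy CONNECT request starts with the ASCII method name.
    static boolean isConnect(byte[] head) {
        byte[] connect = "CONNECT ".getBytes(StandardCharsets.US_ASCII);
        if (head.length < connect.length) return false;
        for (int i = 0; i < connect.length; i++) {
            if (head[i] != connect[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] head = "CONNECT example.com:443 HTTP/1.1\r\n"
                .getBytes(StandardCharsets.US_ASCII);
        System.out.println(isConnect(head) + " " + looksLikeTls(head)); // true false
    }
}
```

In the unification handler, isConnect means pass the bytes through untouched; looksLikeTls means add the SSL handler first; in either branch the handler then removes itself from the pipeline.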

Monday, July 20, 2015

Threading Race Deque

The following code has been causing problems at various times and has sent me debugging in a lot of unrelated areas.

private final NavigableSet<InMemoryScenarioRecord> scenarios = new ConcurrentSkipListSet<>();

public void fireCompletedScenarios() {
    while (!scenarios.isEmpty()) {
        InMemoryScenarioRecord firstScenario = scenarios.first();
        if (firstScenario.completed()) {
            synchronized (completedScenarios) {
                completedScenarios.add(scenarios.pollFirst());
            }
        } else {
            // exit when you find the first uncompleted scenario
            break;
        }
    }
}

There is a navigable set of scenarios, ordered by scenario time. We pull completed scenarios out of it and add them to another queue, breaking when we find the first incomplete scenario.

The issue I have been facing is incomplete scenarios getting into the queue of completed scenarios. As usual, the case does not happen in the local environment; it happens in non-prod environments, under high load. Hence, it must be some race condition causing the issue.

Finally it struck me: there is a bug in the above code that manifests under high load. The bug is as follows:

The scenarios.first() just gives me access to the first scenario and does not remove it from the navigable set. After the completion check, pollFirst is called to remove it from the set.

This is the bug: under high load, the element returned by first and the element removed by pollFirst are not the same!

Another incomplete scenario can arrive in the set between the two calls; it sorts to the top because it has a lower scenario time, but it arrives late because of the asynchronicity in this system. As a result, pollFirst returns an incomplete scenario, which gets added to the completed queue!
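One way to fix it is to call pollFirst() exactly once per iteration, so the element checked is the element removed; a sketch (class names are illustrative, and in a truly concurrent setting the drain should still be confined to a single thread or a lock, since re-inserting the incomplete head is itself briefly visible):

```java
import java.util.ArrayDeque;
import java.util.NavigableSet;
import java.util.Queue;
import java.util.concurrent.ConcurrentSkipListSet;

public class ScenarioDrain {
    static class Scenario implements Comparable<Scenario> {
        final long time;
        final boolean completed;
        Scenario(long time, boolean completed) {
            this.time = time;
            this.completed = completed;
        }
        boolean completed() { return completed; }
        public int compareTo(Scenario o) { return Long.compare(time, o.time); }
    }

    // Drain completed scenarios from the head of the set. pollFirst() is the
    // only removal call, so check and removal act on the same element.
    static void fireCompletedScenarios(NavigableSet<Scenario> scenarios,
                                       Queue<Scenario> completed) {
        Scenario head;
        while ((head = scenarios.pollFirst()) != null) {
            if (head.completed()) {
                completed.add(head);
            } else {
                scenarios.add(head); // put the incomplete head back and stop
                break;
            }
        }
    }

    public static void main(String[] args) {
        NavigableSet<Scenario> scenarios = new ConcurrentSkipListSet<>();
        scenarios.add(new Scenario(1, true));
        scenarios.add(new Scenario(2, true));
        scenarios.add(new Scenario(3, false));
        scenarios.add(new Scenario(4, true));
        Queue<Scenario> completed = new ArrayDeque<>();
        fireCompletedScenarios(scenarios, completed);
        // Only the two completed head scenarios move; 3 and 4 stay behind.
        System.out.println(completed.size() + " " + scenarios.size()); // prints 2 2
    }
}
```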

Huh! I fixed this oversight and learnt a valuable lesson!

Sunday, February 16, 2014

Farey Series

I have been managing to read the book Recreations in the Theory of Numbers, a good informal book on number theory. The Farey series is one of the topics I wanted to write a program for. The following Groovy code generates the Farey series up to the given value of the denominator.
def farey = { n ->
    // start from 1/n and 1/(n-1); each next term follows from the
    // Farey recurrence: z = floor((n + b) / d), next = (z*c - a)/(z*d - b)
    def a = 1, b = n, c = 1, d = n - 1
    printf "%d/%d ", a, b
    while (c != 1 || d != 1) {
        printf "%d/%d ", c, d
        def z = (int) ((n + b) / d)
        (a, b, c, d) = [c, d, z * c - a, z * d - b]
    }
    println()
}

farey 7

1/7 1/6 1/5 1/4 2/7 1/3 2/5 3/7 1/2 4/7 3/5 2/3 5/7 3/4 4/5 5/6 6/7 

Saturday, January 25, 2014

Learning OAuth 2.0

The source of truth for OAuth 2.0 is, of course, RFC 6749. The language and explanation in the RFC are very much comprehensible.
     +--------+                               +---------------+
     |        |--(A)- Authorization Request ->|   Resource    |
     |        |                               |     Owner     |
     |        |<-(B)-- Authorization Grant ---|               |
     |        |                               +---------------+
     |        |
     |        |                               +---------------+
     |        |--(C)-- Authorization Grant -->| Authorization |
     | Client |                               |     Server    |
     |        |<-(D)----- Access Token -------|               |
     |        |                               +---------------+
     |        |
     |        |                               +---------------+
     |        |--(E)----- Access Token ------>|    Resource   |
     |        |                               |     Server    |
     |        |<-(F)--- Protected Resource ---|               |
     +--------+                               +---------------+

OAuth 2.0 defines quite a few API endpoints. The RFC provides examples of the requests and expected responses for these APIs.
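For example, the access token request of the authorization code grant (RFC 6749, section 4.1.3) is a form-encoded POST to the token endpoint; a sketch that builds such a body (all parameter values below are placeholders):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class TokenRequest {
    // Build the application/x-www-form-urlencoded body for the access token
    // request of the authorization code grant (RFC 6749, section 4.1.3).
    static String tokenRequestBody(String code, String redirectUri, String clientId) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("grant_type", "authorization_code");
        params.put("code", code);          // the code returned in step (B)/(C)
        params.put("redirect_uri", redirectUri);
        params.put("client_id", clientId);
        return params.entrySet().stream()
                .map(e -> e.getKey() + "="
                        + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        System.out.println(tokenRequestBody(
                "abc123", "https://client.example.com/cb", "s6BhdRkqt3"));
    }
}
```

The authorization server responds to this POST with a JSON body carrying access_token, token_type, and expires_in, which is the token used in steps (E) and (F) of the diagram above.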

One of the good ways to understand the RFC is to build the OAuth endpoints and try out the samples mentioned in it. Apigee Edge's support for OAuth 2.0 is a quick help here.

This GitHub project, oauth20_apigee, contains the proxy and the Postman client requests.

Apart from the RFC, the following are two good resources on OAuth 2.0:

Friday, December 13, 2013

brew install octave fails on OS X - Mountain Lion

Octave installation on Mountain Lion fails during the make install phase with the following error:
 make install
./plot.texi:3957: warning: node `Multiple Plot Windows' is prev for `Printing and Saving Plots' in menu but not in sectioning
 make[3]: *** [] Error 1
 make[2]: *** [install-recursive] Error 1
 make[1]: *** [install-recursive] Error 1
 make: *** [install] Error 2
The solution seems to be the manual patch indicated here:
In summary: --enable-docs=no makes it work.