I have been asked this question multiple times by various team members: which one should we use in a project, SLF4j or LOG4j, or both? It has been a long time since SLF4j entered the picture and it has been adopted widely, but certain questions never go away. Hence I thought of writing my answer down as a blog post so others can refer to it from time to time, when needed.

Going back to the question, SLF4j or LOG4j? The question itself is wrong. SLF4j and LOG4j focus on different areas and they are not similar components. Please don't compare them to decide which one is better, as they are meant to do two different things.

SLF4j is a logging facade: it doesn't do any logging by itself but instead delegates to a logging component such as LOG4j, Logback or java.util.logging. SLF4j is an API designed to give generic access to many logging frameworks, so your logging code at the application level remains the same while the underlying logging framework can be switched without any actual source code changes.

Once you get used to the syntax of SLF4j, you don't need to worry about the syntax of different logging frameworks. Another major feature of SLF4j, which convinced me to use it over my long-time favourite LOG4j, is the placeholder, represented as {} in code. A placeholder works much like %s in String's format() method: it gets substituted by the actual value supplied at runtime. This not only removes a lot of String concatenation from your code, but also the cost of creating String objects. Since Strings are immutable, every concatenation creates a new object on the heap, and most of the time these Strings are not even needed; e.g. a String built for a DEBUG statement is wasted when your application is running at ERROR level in production.

By using SLF4j, you defer String creation until runtime, which means only the required Strings are created. If you have been using LOG4j, you are already familiar with the workaround of wrapping debug statements in an if() condition, but SLF4j placeholders are much cleaner than that.

LOG4j Style:

 if (LOGGER.isDebugEnabled()) {
    LOGGER.debug("Initiating Batch Processing... RequestId: " + requestId + ", Region: " + region);
 }

SLF4j Style:

LOGGER.debug("Initiating Batch Processing... RequestId: {}, Region: {}", requestId, region);

You might be wondering what happens if you have more than two parameters: you can either use the varargs version of the log methods or pass the parameters as an Object array. It is a really convenient and efficient way of logging. Remember, before generating the final String for the log message, the method checks whether the particular log level is enabled, which reduces not only memory consumption but also the CPU time spent executing String concatenation instructions unnecessarily. It is also worth knowing that excessive logging has a severe impact on application performance, and it is always advisable to keep only mandatory logging in a production environment.
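To see why the placeholder style is cheap when a level is disabled, here is a simplified, self-contained model of an SLF4j-style logger. This is purely an illustrative sketch (the MiniLogger class and its method bodies are invented for this post, not SLF4j's actual implementation; the real substitution logic lives in SLF4j's MessageFormatter helper):

```java
// Illustrative sketch only: a miniature logger mimicking SLF4j's {} placeholder
// behaviour. The class and method names here are invented for this example.
public class MiniLogger {
    private final boolean debugEnabled;

    public MiniLogger(boolean debugEnabled) {
        this.debugEnabled = debugEnabled;
    }

    // Varargs version: any number of arguments, no String is built unless needed.
    public void debug(String format, Object... args) {
        if (!debugEnabled) {
            return; // level check happens BEFORE any formatting or concatenation
        }
        System.out.println(format(format, args));
    }

    // Replace each {} with the next argument, left to right.
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIndex = 0;
        int from = 0;
        int at;
        while ((at = pattern.indexOf("{}", from)) >= 0 && argIndex < args.length) {
            sb.append(pattern, from, at).append(args[argIndex++]);
            from = at + 2;
        }
        return sb.append(pattern.substring(from)).toString();
    }

    public static void main(String[] args) {
        MiniLogger log = new MiniLogger(false);
        // DEBUG is off, so no formatting work is done at all for this call:
        log.debug("Initiating Batch Processing... RequestId: {}, Region: {}", 42, "APAC");
        System.out.println(format("RequestId: {}, Region: {}", 42, "APAC"));
    }
}
```

Calling log.debug(...) with DEBUG disabled returns immediately, so neither the placeholder substitution nor any String concatenation ever runs, which is exactly the saving described above.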

Code Snippet from org.slf4j.impl.Log4jLoggerAdapter:

 public void debug(String format, Object arg1, Object arg2) {
    if (logger.isDebugEnabled()) {
       FormattingTuple ft = MessageFormatter.format(format, arg1, arg2);
       logger.log(FQCN, Level.DEBUG, ft.getMessage(), ft.getThrowable());
    }
 }


  1. SLF4j provides placeholder-based logging, which improves code readability by removing checks like isDebugEnabled(), isInfoEnabled() etc.
  2. By using SLF4j's logging methods, you defer the cost of constructing log messages (Strings), which is both memory and CPU efficient.
  3. On a side note, fewer Strings mean less work for the Garbage Collector, which means better throughput and performance for your application.
  4. Using SLF4j makes your source code independent of any particular logging implementation, i.e., there is no need to manage multiple logging configurations for multiple libraries.

So essentially, SLF4j does not replace LOG4j; they work together hand in hand. It removes the hard dependency on LOG4j from your application and makes it easy to replace LOG4j in the future with a more capable library without any source code changes.

By: Nataraj Srikantaiah

The Return Gift

News of the big online retailers revisiting their return policies has made headlines in all the leading business newspapers. And why not, given that one out of five shipments ends up in the return bucket. There is considerable cost associated with returned products, and the last three years have seen very liberal return policies.

Let's rewind a little and look at what an online return is. Any product returned by the customer after the order is placed and the shipment has left the source location is a return, and the biggest concern is when the customer returns the product after receiving it.

The cost involved in bringing back the product, and the difficulty of putting it back as "good inventory", is a nightmare for online retailers. Retailers in general are no strangers to the concept of "returns", but the modus operandi of online retail makes returns look like a little monster. In offline apparel retail we have the concept of the "trial room", and the dresses left outside the trial room are a type of return. Extrapolated to online retail, the customer's home becomes the trial room, and to return the product it has to be shipped back to the online retailer, and this is where it gets tricky.

Return policy for many online retailers is like a see-saw: on one side is the retailer, on the other the customer. It is perceived as a hassle for one or the other. A liberal return policy can be a hassle for the retailer, whereas a stringent return policy can make the customer's life difficult. Any tool built for the retailer's convenience at the expense of the customer is bound to fail. Thankfully, unlike the chicken-and-egg situation, we have a silver bullet.

Returns can be classified as avoidable and unavoidable returns.

Cases like size issues or the item not matching its description fall under the category of avoidable returns, and it makes perfect sense to minimize them, as their existence has a negative impact on the customer experience. They can be reduced through analytics, an improved website experience and disciplined supply chain processes.

It is in handling unavoidable returns, like fit issues or malfunctioning products, that an organization's return policy plays a vital role. It is wise to give the customer the benefit of the doubt, though it is prudent to have category-specific return policies in place. The entire return experience should be hassle-free, right from placing the return request to the refund/replacement/exchange and the reutilization of the returned product.

It is worth mentioning that a return policy can be an effective marketing tool in itself and, to a good extent, reflects an organization's customer obsession. Whatever the policy, retailers who understand that returns are an inherent characteristic of the online retail business will walk away with the best return gift from the online birthday party.

By: Asish Neogy

A Conceptual ‘Performance Mode’ for Web Applications

What is it?


A dedicated mode to help web applications weather peak or unexpected traffic.

Why do we need it?

Oftentimes, performance-heavy or resource-intensive features bring a web application to its knees.

How would it work?

There needs to be a mechanism to intelligently turn off features or functionality depending on their 'PSR (Performance / Scalability / Reliability) cost'.

Step 1 – Measure


It is essential to weigh each feature set within the application against its PSR impact. Loosely, this can be done by measuring the resource utilization of the specific feature, namely CPU, memory, disk and network I/O.

Step 2 – Categorize


Split features into three 'categories' or 'buckets' based on their performance weight. Let us, for example, take 'Low', 'Medium' and 'High' impact categories.

Step 3 – Monitor and Switch


Continually monitor traffic in the production environment. Once traffic reaches a known or tested limit (say 85% of peak capacity), disable the 'High' and 'Medium' impact features, so that the web application runs in a light mode that only allows access to core features. This prevents downtime and perhaps also buys reaction time for support teams (for example, to add additional server capacity to handle the excess load).
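The monitor-and-switch step could be modelled with a small feature registry in code. Everything below (the FeatureRegistry name, the Impact buckets, the 85% threshold) is a hypothetical sketch of the idea, not an existing API:

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of the "Performance Mode" switch described above.
// FeatureRegistry, Impact and the 0.85 threshold are illustrative choices.
public class FeatureRegistry {
    enum Impact { LOW, MEDIUM, HIGH }

    private final Map<Impact, Boolean> enabled = new EnumMap<>(Impact.class);

    public FeatureRegistry() {
        // All feature buckets are enabled under normal traffic.
        for (Impact i : Impact.values()) {
            enabled.put(i, true);
        }
    }

    // Step 3: once traffic crosses the threshold, shed HIGH and MEDIUM features.
    public void onTraffic(double fractionOfPeakCapacity) {
        boolean overloaded = fractionOfPeakCapacity >= 0.85;
        enabled.put(Impact.HIGH, !overloaded);
        enabled.put(Impact.MEDIUM, !overloaded);
        // LOW-impact (core) features stay on even in performance mode.
    }

    public boolean isEnabled(Impact impact) {
        return enabled.get(impact);
    }

    public static void main(String[] args) {
        FeatureRegistry registry = new FeatureRegistry();
        registry.onTraffic(0.90); // traffic at 90% of peak capacity
        System.out.println("HIGH features enabled: " + registry.isEnabled(Impact.HIGH));
        System.out.println("LOW features enabled: " + registry.isEnabled(Impact.LOW));
    }
}
```

In a real system the onTraffic call would be driven by the monitoring pipeline, and each request handler would consult isEnabled before serving a non-core feature.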

By: Manoj Mohanan

Pattern Matching in Scala

You must have used switch cases many times in your Java code. However, in Java a case entry can only be a constant of an integral type, an enum, or (since Java 7) a String. What if the case entries could be more flexible? By flexible, I mean strings, objects, primitive data types or a combination of all of them. The alternative is a series of if-else-if statements, which quickly becomes annoying.

If something like this

object example1 extends App {
   def matchTest(x: Any): Any = x match {
      case 1 => "one"
      case "two" => 2
      case y: Int => "scala.Int"
      case _ => "many"
   }
}

is a revolution for you, then Scala Pattern Matching is definitely a breath of fresh air!

Pattern matching is a built-in mechanism which matches data against patterns with a first-match policy. The following simple snippet will help you understand the concept:

case class Person (name: String, gender: String, age: Int)

object example2 {
   def advance(xs: Person) = xs match {
      case Person(name, _, _) => println(name)
      case _ => println(0)
   }

   def main(args: Array[String]) {
      val person = Person("Edward", "male", 25)
      advance(person)
   }
}

It actually helps us disintegrate a given object, binding names to the values it is composed of. This idea is not unique to Scala; it also exists in Haskell, OCaml and Erlang. We can write the above piece of code only because of the existence of something called "extractors". Their functionality is somewhat the opposite of a constructor's: while a constructor creates an object from a given list of parameters, an extractor extracts the parameters from a constructed object. You can relate this to the line:

case Person (name, _, _)  => println(name)

Now the question is: how does this actually work?

Note that the Scala library already contains some predefined extractors. In the above example we used a case class, which automatically creates a companion object containing apply and unapply methods. The apply method is used to create new instances of the class, whereas the unapply method must be implemented by an object for pattern matching to extract values from it.

Given below is an example where we define our own unapply method. There is more than one possible signature for unapply, but we will use the most common one:

trait Person {
   def name: String
   def gender: String
   def age: Int
}

class SeniorPerson(val name: String, val gender: String, val age: Int) extends Person

class JuniorPerson(val name: String, val gender: String, val age: Int) extends Person

object SeniorPerson {
   def unapply(user: SeniorPerson): Option[(String, String, Int)] =
      Some((user.name, user.gender, user.age))
}

object JuniorPerson {
   def unapply(user: JuniorPerson): Option[(String, String, Int)] =
      Some((user.name, user.gender, user.age))
}

object example3 {
   def main(args: Array[String]) {
      val user: Person = new JuniorPerson("Edward", "male", 10)
      user match {
         case SeniorPerson(name, _, age) => println("My name is " + name + ". My age is " + age)
         case JuniorPerson(name, _, age) => println("My name is " + name + ". My age is " + age)
         case _ => println(0)
      }
   }
}

I hope this helps you understand how pattern matching actually works in Scala, and helps you implement your own extractors as needed. For me it is an amazing alternative to chains of IF statements!

By: Shweta Shaw

WWDC 2016 – a recap


With the recently concluded WWDC, a week-long event held in San Francisco, Apple has brought their 'ecosystem' closer together than ever. For starters, Apple has renamed the OS X line of operating systems to macOS, aligning it with the naming of the operating systems on their other devices (namely iOS, watchOS and tvOS). To add to that, they have now brought Siri to macOS, which bridges the mobile and computer worlds further together. However, I would like to shift my focus to the major changes brought to the iPhone along with iOS 10, because of their relevance to Blibli.

  1. User Experience Changes.

Apple is known to be a company that prioritises user experience more than anything, which is something that has revolutionised the industry. With the new iOS, Apple has shown how much it values user experience by cleaning up its user interface in a number of places. Besides making the camera easily accessible from the lock screen, and adding a raise-to-wake feature for the iPhone 6s and iPhone 6s Plus, Apple has redesigned notifications to let you do a lot more right from your lock screen. For example, you can use 3D Touch to press down on a Calendar notification to Accept, Maybe or Decline a meeting invitation without leaving the lock screen, and you can also see photos, videos and other rich messages without having to leave the lock screen.


  2. Siri

Besides making Siri available on the Mac, Apple has added a whole bunch of new functionality to establish Siri as the best voice assistant on the market. Siri now provides third-party integration and support, opening the voice assistant up to developers of VoIP, messaging, ride-booking, health, photo search and payment applications. You can now simply pick up your phone and tell Siri "Slack my manager that I'm going to be twenty minutes late" if it ever happens to you 😉

  3. Quicktype


Apple has now semi-integrated Siri's intelligence with the keyboard suggestions to provide amazing suggestions for responses. For example, if a friend asks you for someone's number, Siri recognises this and offers up that person's contact as a suggestion. Furthermore, talking about a lunch or dinner plan lets you make calendar entries (yes, Siri also checks whether you are free at the time) right from the Messages app, so you do not have to leave your conversation and switch apps to do so.



  4. Messages

Admittedly, Apple's messaging has not been able to garner the popularity of WhatsApp and Facebook Messenger, but Apple has updated the app to bring it up to standard and possibly pull ahead of its competitors in features. First off, emojis can now be enlarged and messages can be 'emojified' (I had to put this first, it got the biggest applause in the keynote!). Apple has now opened Messages up to third-party apps, allowing developers to come up with their own stickers that users can paste anywhere in their conversations. Furthermore, you can now add effects to messages, so if it's a celebratory message, the app behaves the way you'd want it to!

  5. Photos


Photos has gone through a few small but nevertheless cool changes. It now provides facial recognition and location detection capabilities, and it has a separate tab called 'Memories'. Memories organises your photos and lets you create beautiful videos from important pictures at the tap of a button. The next time you go on a weekend trip to Nandi Hills with your friends and take 500 pictures, all you have to do is tap a button and Apple will automatically create a video with the best photos for you to share with people you know!

Besides all of the changes I've discussed above, Apple has also opened Maps up to third parties, allowing developers to make ride- and restaurant-booking apps for Maps without users having to navigate away from the app. Apple Music and Apple News have also gone through a much-needed design overhaul to make them easier and cleaner to use. Apple further announced a new application called Home, which lets users integrate and control their entire home system right from their phone. No more getting out of bed to turn off the lights at night!

While these were the key updates in the new iOS, the new operating system seems more like one that has brought Apple on par with its competitors in terms of features. One noteworthy exception is definitely Siri, which is now more powerful than ever. With the introduction of Siri to macOS, one cannot help but notice how increasingly close Apple has brought its desktop and mobile devices. Apple's decision to withhold Android support for iMessage was definitely the big miss of the keynote, besides hopefuls yearning for Apple's entry into the VR world. Despite the misses, developers around the world are glad Apple has begun to open its apps up to third-party integration, and the future for Apple definitely looks brighter after this WWDC than before it.

By: Kunal Thacker

Java String Concatenation

Have you been told many times not to use the + operator to concatenate Strings? We know it is not good for performance. But how do you really know whether that is true? Do you know what is happening under the hood? Let's go ahead and explore String concatenation.

In the initial versions of Java, around JDK 1.2, everybody used + to concatenate two String literals. Strings are immutable, i.e., a String cannot be modified. So what happens when we write the following code snippet?

String message = "WE INNOVATE "; 
message = message + "DIGITAL";

In the above Java code snippet it looks like the String is modified, but in reality it is not. Until JDK 1.4 a StringBuffer was used internally for concatenation, and from JDK 1.5 onwards a StringBuilder is used. After concatenation, the resulting StringBuffer or StringBuilder is converted back to a String object.
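As a rough sketch of that translation (an approximation of what the compiler generates on JDK 5 and later; the exact code javac emits varies by version), the snippet above behaves like:

```java
// Roughly what the compiler generates for: message = message + "DIGITAL";
// (an approximation, not literal javac output)
public class ConcatTranslation {
    public static void main(String[] args) {
        String message = "WE INNOVATE ";
        message = new StringBuilder().append(message).append("DIGITAL").toString();
        System.out.println(message); // WE INNOVATE DIGITAL
    }
}
```

Inside a loop this means a fresh StringBuilder is created on every iteration, which is exactly why the manual StringBuilder version in the benchmark below is so much faster.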

You may have heard from Java experts: "don't use the + operator, use StringBuffer". But if + uses a StringBuffer internally anyway, what big difference does it make?

Look at the following example, which uses both + and StringBuilder as two different cases.

  • Case 01: just using the + operator to concatenate.
  • Case 02: building the String with a StringBuilder and finally converting it back to a String.

I have used a timer to record the time taken in each case.

package com.bhargav.utils;

/**
 * @author nsrikantaiah
 */
public class StringConcatenateExample {

  private static final int LOOP_COUNT = 50000;

  public static void main(final String args[]) {
    long startTime, endTime;

    // Case 01: concatenate using the + operator.
    startTime = System.currentTimeMillis();
    String message = "*";
    for(int i = 1; i <= LOOP_COUNT; i++) {
      message = message + "*";
    }
    endTime = System.currentTimeMillis() - startTime;
    System.out.println("Time taken to concatenate using + operator: " +
        endTime + " ms.");

    // Case 02: append to a StringBuilder, then convert back to a String.
    startTime = System.currentTimeMillis();
    StringBuilder sBuilder = new StringBuilder("*");
    for(int i = 1; i <= LOOP_COUNT; i++) {
      sBuilder.append("*");
    }
    String result = sBuilder.toString();
    endTime = System.currentTimeMillis() - startTime;
    System.out.println("Time taken to concatenate using StringBuilder: " +
        endTime + " ms.");
  }
}

Look at the output (if you run this program, the numbers will vary slightly based on your hardware/software configuration). The difference between the two cases is striking.

You might argue: if the + operator uses a StringBuilder internally for concatenation, why is there such a huge difference in time? Let me explain. When the + operator is used for concatenation inside a loop, the following steps happen behind the scenes on each iteration:

  1. A new StringBuilder (StringBuffer before JDK 1.5) object is created.
  2. message is copied into the newly created StringBuilder.
  3. The "*" is appended to the StringBuilder (concatenation).
  4. The result is converted back to a String object.
  5. The message reference is pointed at that new String.
  6. The old String that message previously referenced becomes eligible for garbage collection.

I hope you now understand the serious performance issues that can result from using the + operator for repeated concatenation, and why it is important to use StringBuffer or StringBuilder (from Java 1.5) to concatenate Strings.

And on a side note, StringBuffer is slower than StringBuilder because it is thread-safe: all of its methods are synchronized. So choose between them wisely based on your requirements.

By: Nataraj Srikantaiah

How we automated settings on a real iPhone using Appium

Anyone associated with iOS automation on real Apple devices knows the challenges it comes with. Appium is a leading automation framework; however, one of its drawbacks is that on a real iPhone you can only launch .ipa files signed with a development provisioning profile/certificate, not the distribution provisioning profile/certificate used by apps you download from the App Store. So how do you get such signed native apps?

Recently we had to automate an app-specific setting in the preferences/settings of the iPhone. It was very important for the flow and we were completely lost. Luckily we stumbled across a small app by Budhash that launches the native Safari on iOS. After some more research we found an answer on StackOverflow showing a way to launch Settings from any app on iOS 8 onwards.

[[UIApplication sharedApplication] openURL:[NSURL URLWithString:UIApplicationOpenSettingsURLString]];

And that was it: we made some minor modifications on top of SafariLauncher and got our own Settings Launcher. Since it was our own app, we were able to sign it and launch it using Appium, and our app then launched Settings, which Appium can control. The app is here. Feel free to use it and contribute. Thanks!

By: Mr.Automator

Git Cherry Pick

Some team members asked me how to merge only specific commits from one branch into the current branch. The reason you'd want to do this is to pull in specific changes you need immediately, leaving behind the other changes you're not interested in.

First of all, use git log to see exactly which commit you want to pick, or use the UI to identify the commit ID.


As an example:


Let's say you've written some code in commit f69eb3 of the feature branch that is very important right now: it may contain a bug fix, or code that other people need access to now. Whatever the reason, you want commit f69eb3 in the release branch, but not the other code you've written in the feature branch. This is where git cherry-pick comes in very handy: in this case, f69eb3 is the cherry and you want to pick it.

Below are the step-by-step instructions to pick one commit from the feature branch into the release branch.

git checkout release
git cherry-pick f69eb3

That's all: f69eb3 is now applied to the release branch and committed there (as a new commit). cherry-pick behaves just like a merge: if git can't apply the changes cleanly, you will get merge conflicts, and git leaves you to resolve them manually and make the commit yourself.


In some cases picking one single commit is not enough; you may need, say, a few consecutive commits. In this case, cherry-pick is not the right tool; use rebase instead. Continuing the previous example, suppose you want commits 76f39a through b816a0 in release.


The process is to first create a new branch from feature at the last commit you want, in this case b816a0.

git checkout -b mybranch b816a0


Next, you rebase mybranch --onto release. The 76f39a^ indicates that you want to start from that specific commit.

 git rebase --onto release 76f39a^


The result is that commits 76f39a through b816a0 are applied to the release branch.

Please note, a git commit ID is a hash of both the commit's contents and its history. So even if two commits introduce the exact same change, if they point to different parent commits they have different IDs. After the cherry-pick, the commit in the release branch will therefore have a new commit ID rather than the original one.


By : Nataraj Srikantaiah

Building the Digital Supply Chain and Production Line

Let's begin with a definition of DevOps from an ITIL background.

“DevOps is just ITIL with 90% of stuff moved to ‘Standard Change’ because we automated the crap out of it” – TheOpsMgr

A more modern definition and scope of DevOps is covered in the CAMS model and is much wider than that, but this is a good starting point for those beginning their DevOps journey.


Returning to what is defined as a Standard Change:

Standard Changes are pre-approved changes that are considered relatively low risk, are performed frequently, and follow a documented (and Change Management approved) process. Think standard, as in "done according to the approved, standard process".

Let's consolidate all of the above and dive into how DevOps is transforming your digital supply chain and production line.

DevOps is changing the role of IT Operations: they start to focus more on the digital supply chain and release pipeline rather than trying to inspect every single package in the pipeline. In this way DevOps engineers also become process engineers, who design the pipeline so that its outcome meets the objectives for quality, risk and compliance and is consistent with the "Standard Change".

The move towards converting your IT operations into automated workflows and infrastructure-as-code ensures that you are not skipping any essential component of your "desired system".

Now, if we break the above definition of "Standard Change" into its three main parts and correlate them to the DevOps world, we get:

  • Relatively low risk – DevOps reduces risk via automation, test-driven development (of application AND infrastructure code), rapid detection of issues through enhanced monitoring, and robust rollback.
  • Performed frequently – this is a key tenet of DevOps: if it's painful and you do it often, learn to do it better (via automation/workflows).
  • Follow a documented process – DevOps is about building a robust digital supply chain: your highly automated, end-to-end process for software development, testing, deployment and support, with the necessary checks and balances for change-management compliance built in. Instead of heaps of documents lying somewhere, convert your digital supply chain into automated workflows and your infrastructure design into code.

A DevOps digital supply chain transforms raw materials (source code) via continuous integration, test automation, packaging, release automation, infrastructure-as-code etc. into applications running in cloud-hosted environments.

So, just as a physical production line includes statistical sampling, automated testing etc., so will the digital supply chain of the future. We already do this with TDD/BDD and automated testing with tools like Selenium, but it will become the DevOps engineer's job to ensure that the digital production line delivers release packages of sufficient quality to ensure the stability of the application.


So, will the Operations Engineer of the future be “just managing (virtual) servers”?

No, almost certainly not.

What they will be doing is:

  • Designing and building complex digital supply chains with complex interdependencies both inside and outside the organization: supply chains designed to meet the needs of applications that in turn meet the needs of their customers, safely, securely and cost-effectively.
  • Designing the approved process that says all changes must pass automated testing, whereby they might periodically pick any one instance/release in any environment and review it using the automation scripts (Chef/Puppet/Ansible, etc.) to ensure that a flag or template hasn't been replaced or gone stale because no one bothered to keep it up to date.
  • Similarly, designing the process that mandates "separation of duties", so that they can check that the person who initiated the change (via the pipeline, using Jenkins or Rundeck) has the appropriate roles and is approved to do so.

The overall goal here is to move towards a culture of "Trust, but Verify", ensuring that the appropriate checks are applied and your systems remain consistent and in a balanced state.

By: Manik Dham