Java String Concatenation

Have you been told many times not to use the + operator to concatenate Strings? We know it is supposed to be bad for performance, but how do you really know whether that is true? Do you know what is happening under the hood? Why don’t we go ahead and explore String concatenation in detail?

In the initial versions of Java, around JDK 1.2, everybody used + to concatenate two String literals. Strings are immutable, i.e., a String cannot be modified once created. So what happens when we write the following code snippet?

String message = "WE INNOVATE "; 
message = message + "DIGITAL";

In the above Java code snippet it looks like the String is modified, but in reality it is not. Up to JDK 1.4 a StringBuffer was used internally for concatenation; from JDK 1.5 a StringBuilder is used instead. After the concatenation, the resulting StringBuffer or StringBuilder is converted back to a String object.

You would have heard Java experts say, “don’t use the + operator, use StringBuffer”. But if + uses a StringBuffer (or StringBuilder) internally anyway, what big difference does it make?

Look at the following example. I have used both + and StringBuilder as two different cases.

  • Case 01, I just use the + operator to concatenate.
  • Case 02, I create a StringBuilder, do the concatenation with append, and finally convert it back to a String.

I have used a timer to record the time taken in each case.

package com.bhargav.utils;

/**
 * @author nsrikantaiah
 *
 */
public class StringConcatenateExample {

  private static final int LOOP_COUNT = 50000;
  
  public static void main(final String args[]) {
    
    long startTime, endTime;
    
    startTime = System.currentTimeMillis();
    String message = "*";
    for(int i=1; i<=LOOP_COUNT; i++) {
      message = message + "*";
    }
    endTime = System.currentTimeMillis() - startTime;
    System.out.println("Time taken to concatenate using + operator: " + 
endTime + " ms.");

    startTime = System.currentTimeMillis();
    StringBuilder sBuilder = new StringBuilder("*");
    for(int i=1; i<=LOOP_COUNT; i++) {
      sBuilder.append("*");
    }
    String result = sBuilder.toString(); // convert back to a String, as case 01 does
    endTime = System.currentTimeMillis() - startTime;
    System.out.println("Time taken to concatenate using StringBuilder: " + 
endTime + " ms.");
    
  }
  
}

Look at the output (if you run this Java program, the numbers may vary slightly based on your hardware/software configuration). The difference between the two cases is striking.

You might argue: if the + operator uses a StringBuilder internally for concatenation, why is there such a huge difference in time? Let me explain. When the + operator is used inside the loop, all of the following steps happen on every iteration:

  1. A new StringBuilder object is created.
  2. The current contents of message are copied into it.
  3. The “*” is appended to the StringBuilder (the concatenation).
  4. The result is converted back to a new String object.
  5. The message reference is made to point at that new String.
  6. The old String that message previously referenced is left unreferenced and becomes eligible for garbage collection.
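The steps above can be sketched in plain Java (the class name is mine; this is roughly what the compiler generates for message = message + "*", not the literal bytecode):

```java
public class PlusDesugared {
    public static void main(String[] args) {
        String message = "*";
        for (int i = 1; i <= 3; i++) {
            // message = message + "*"; effectively becomes:
            StringBuilder tmp = new StringBuilder(); // 1. a new builder per iteration
            tmp.append(message);                     // 2. copy the current contents
            tmp.append("*");                         // 3. append the literal
            message = tmp.toString();                // 4./5. convert back and repoint
            // 6. the previous String is now unreferenced, eligible for GC
        }
        System.out.println(message); // prints "****"
    }
}
```

Because the builder is created and thrown away on every pass, the work grows with the length of the String, which is exactly what the timings show.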

Hopefully this makes clear the serious performance issues that can result from using the + operator for concatenation in a loop, and why it is important to use StringBuffer or StringBuilder (from Java 1.5) to concatenate Strings repeatedly.

On a side note, StringBuffer is slower than StringBuilder because it is thread-safe: all of its methods are synchronized. So decide wisely between the two based on whether your usage requires thread safety.
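As a minimal sketch of that difference (class name and loop counts are my own), two threads appending to one shared StringBuffer never lose an append, because each append call is synchronized; with a shared StringBuilder the final length would be unpredictable:

```java
public class BufferThreadSafety {
    public static void main(String[] args) throws InterruptedException {
        final StringBuffer shared = new StringBuffer();
        // Each thread appends 10,000 characters to the same buffer.
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                shared.append("*");
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // StringBuffer.append is synchronized, so no appends are lost.
        System.out.println(shared.length()); // always 20000
    }
}
```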

By: Nataraj Srikantaiah

How we automated settings on a real iPhone using Appium

Anyone associated with iOS automation on real Apple devices knows the challenges it comes with. Appium is a leading automation framework; however, one of its drawbacks is that on a real iPhone you can only launch .ipa files that are signed with a development provisioning profile/certificate, not the distribution provisioning profile/certificate used for apps downloaded from the App Store. So how do you get such signed native apps?

Recently we had to automate app-specific settings in the Preferences/Settings of the iPhone. It was very important for the flow and we were completely lost. Luckily we stumbled across this small app by Budhash, which was used to launch the native Safari on iOS. After some more research we found this answer on Stack Overflow, which showed a way to launch Settings from any app on iOS 8 onwards.

[[UIApplication sharedApplication] openURL:[NSURL URLWithString:UIApplicationOpenSettingsURLString]];

And that was it: we made some minor modifications on top of SafariLauncher and got our own Settings Launcher. Since it was our own app, we were able to sign it and launch it using Appium, and our app then launched Settings, which Appium can control. The app is here. Feel free to use it and contribute. Thanks!

By: Mr.Automator

Git Cherry Pick

Some of my team members asked me how to merge only specific commits from a branch into the current branch. The reason you’d want to do this is to merge specific changes that you need immediately, leaving behind the other code changes you’re not interested in.

First of all, use git log to see exactly which commit you want to pick or you can use the UI to identify the commit ID.

As an example:
[screenshot: git log output of the feature branch]

Let’s say you’ve written some code in commit f69eb3 of the feature branch that is very important right now. It may contain a bug fix, or code that other people need access to now. Whatever the reason, you want commit f69eb3 in the release branch, but not the other code you’ve written in the feature branch. Here git cherry-pick comes in very handy: in this case, f69eb3 is the cherry and you want to pick it.

Below are the step-by-step instructions to pick one commit from the feature branch into the release branch.

git checkout release

git cherry-pick f69eb3

That’s all: f69eb3 is now applied and committed (as a new commit) on the release branch. cherry-pick behaves just like a merge: if git can’t apply the changes cleanly, you will get merge conflicts, and git leaves you to resolve them manually and make the commit yourself.

In some cases picking one single commit is not enough; you may need, say, a few consecutive commits. In that case cherry-pick is not the right tool; use rebase instead. From the previous example, suppose you want commits 76f39a through b816a0 in release.

First, create a new branch from feature at the last commit you want, b816a0:

git checkout -b mybranch b816a0

Next, rebase mybranch --onto release. The 76f39a^ indicates that you want to start from the parent of that specific commit:

git rebase --onto release 76f39a^ mybranch

The result is that commits 76f39a through b816a0 are replayed on top of release (on mybranch, which you can then merge or fast-forward into release).
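Here is a self-contained sketch of those steps in a throwaway repository (branch and file names are illustrative; the commit IDs are resolved dynamically, standing in for 76f39a and b816a0):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "base"
git branch -M release                 # our stand-in for the release branch
git checkout -qb feature
for n in 1 2 3; do
  echo "$n" > "f$n.txt" && git add "f$n.txt" && git commit -qm "feature $n"
done
first=$(git rev-parse HEAD~1)         # stands in for 76f39a
last=$(git rev-parse HEAD)            # stands in for b816a0
git checkout -qb mybranch "$last"     # branch at the last wanted commit
git rebase -q --onto release "$first"^ mybranch
git log --oneline                     # base plus the two replayed commits
```

After the rebase, mybranch contains the release base plus the two wanted commits; release itself is untouched until you merge or fast-forward.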

Please note that a git commit ID is a hash of both its contents and its history. So even if two commits introduce exactly the same change, if they point to different parent commits they will have different IDs. After the cherry-pick, the commit on the release branch will therefore have a new commit ID, not the same ID as on the feature branch.

By : Nataraj Srikantaiah

Building the Digital Supply Chain and Production Line

Let’s begin with a definition of DevOps coming from an ITIL background.

“DevOps is just ITIL with 90% of stuff moved to ‘Standard Change’ because we automated the crap out of it” – TheOpsMgr

A more modern definition and scope of DevOps is covered in the CAMS model and is much wider than that, but the quote above is a good start for those beginning their DevOps journey.


Returning to what is defined as a Standard Change:

Standard Changes are pre-approved changes that are considered relatively low risk, are performed frequently, and follow a documented (and Change Management approved) process. Think standard, as in “done according to the approved, standard processes”.

Let’s consolidate all of the above and dive into our discussion of how DevOps is transforming your Digital Supply Chain and Production Line.

DevOps is changing the role of IT Operations: they start to focus more on the Digital Supply Chain and Release Pipeline rather than trying to inspect every single package in the pipeline. In this way DevOps engineers also become process engineers, who design the pipeline so that its outcome meets the objectives for quality, risk and compliance and is consistent with the “Standard Change”.

The move towards converting your IT Operations into Automated Workflows and Infra-as-Code ensures that you are not skipping any essential component of your “desired system”.

Now, if we break the previously stated definition of “Standard Change” into three main parts and correlate it to the DevOps world, we get:

  • Relatively Low Risk – DevOps reduces the risks via automation, test-driven development (of application AND infrastructure code), rapid detection of issues via enhanced monitoring, and robust rollback.
  • Increasing Task Frequency – this is a key tenet of DevOps: if it’s painful, do it more often and learn to do it better (via automation/workflows).
  • Follow a Documented Process – DevOps is about building a robust digital supply chain: your highly automated, end-to-end process for software development, testing, deployment and support. As part of that we build in the checks and balances required for compliance with change management processes. Instead of heaps of documents lying somewhere, convert your digital supply chain into automated workflows and your infrastructure design into code.

A DevOps Digital Supply Chain will transform raw materials (source code) via continuous integration, test automation, packaging, release automation, infrastructure-as-code etc. into applications running in cloud-hosted environments.

So, just as a physical production line includes statistical sampling, automated testing etc., so will the Digital Supply Chain of the future. We already do this with TDD/BDD and automated testing with tools like Selenium, but it will become the DevOps job to ensure that the digital production line delivers release packages of sufficient quality to keep the application stable.


So, will the Operations Engineer of the future be “just managing (virtual) servers”?

No, almost certainly not.

What they will be doing is:

  • Designing and building complex digital supply chains with complex interdependencies, both internal and external to the organization: supply chains designed to meet the needs of applications that are in turn designed to meet the needs of their customers, safely, securely and cost-effectively.
  • Designing the approved process that says all changes must pass automated testing, under which they might periodically pick any one instance/release in any environment and review it against the automation scripts (Chef/Puppet/Ansible, etc.) to ensure that a flag or template hasn’t been replaced or gone stale because no one bothered to keep it up to date.
  • Similarly, designing the process that mandates “separation of duties”, so that they can check that the person who initiated the change (via the pipeline, using Jenkins or Rundeck) has the appropriate roles and is approved to do so.

The overall goal here is to move towards such a culture while keeping in mind the mantra of “Trust, but Verify”, ensuring that the appropriate checks are applied and your systems stay consistent and in a balanced state.

By: Manik Dham

Performance Testing in Agile World!

In this era of a hyper-connected world, everybody wants things done quickly and perfectly. The Agile methodology came into existence to solve some of the time-bound problems of the software development life cycle, but there are many challenges when it comes to performance testing in the Agile world. In this article I discuss my research on fitting performance testing into the Agile world, and outline the challenges and a process we can follow to get good results fast.

Goal:
The goal is to test the performance of the application or feature as soon as it is built and ready for functional testing, so that functional and performance testing can be conducted in the same sprint. The benefit of testing early is tremendous, as the cost of fixing performance bottlenecks is lower.

Performance Testing in Agile
The diagram below depicts a performance testing methodology for Agile (the Scrum methodology is considered), which is a summary of my research on the topic. We can divide the whole performance testing effort into three stages, S1, S2 and S3; these stages are tagged to the stages of Agile Scrum.

Stage 1. Unit Performance Testing

This stage can be started as soon as development activity in the sprint starts. It is referred to as Unit Performance Testing and concentrates on method- or code-level testing.

  • Understand the technology and feature being developed (do not disturb)
  • Recommend best performance practices for the technology, web server container or app configuration (but do not insist)
  • Prepare unit performance test stories for the required features
  • Once coding is completed, measure method-level response time and resource usage
  • Method-level performance analysis can be conducted using profilers
  • Use the Test or Dev environment for performance testing, since high load simulation is not required at this stage
  • Try open source tools/profilers for performance testing (hence reducing the cost)
  • Measure and baseline the results, and always ask for feedback

Metrics:

  • Code profile
  • Response time (end user and method level)

Stage 2. Focused/Component Performance Testing

Stage 2 is referred to as Focused or Component Performance Testing. This is not a typical performance testing category: load testing is concentrated on a specific component (feature or functionality). Instead of waiting for a complete code freeze of the application, we can test the feature that is already built and design a load model around the feature to be tested.

  • Load a specific feature or functionality
  • Run repeatedly after each sprint
  • Measure the response time of the new feature under the designed normal load
  • Can be conducted in the functional test environment (low-end configuration)
  • Do not waste time enhancing scripts; concentrate on how to load the functionality/feature

Metrics:

  • Response time of the feature
  • System resource usage under the load
  • Throughput and other performance metrics under load

Stage 3. Performance Integration Testing

This stage is a regular performance testing activity, which includes planning load and stress tests for the whole application at the system configuration level. Stage 3 can be planned during the hardening sprint (or once 3 to 4 sprints of development are completed). Performance testing here focuses on the whole application.

  • Prepare a workload model for the entire system
  • Set up performance monitoring at each application tier
  • Design and execute load, stress and endurance tests to discover bottlenecks at the system level

Metrics:

  • System stability, capability and responsiveness

Advantages:

  • Involvement of the performance team during the sprint helps to build not just an application or a feature, but a well-performing application
  • Early bottleneck detection reduces the cost of fixing performance issues
  • Working on performance fixes in the hardening sprint has a great effect on system stability and capacity

“The bitterness of poor quality remains long after the sweetness of low price is forgotten.”— Benjamin Franklin

By: Irshad Ahmad Pallamajal

Minification of JS/CSS

Minification is the process of compressing source code without changing its functionality. The technique is particularly useful for web applications with huge JavaScript/CSS files; in other cases it is probably just a micro-optimization. Minifying JavaScript/CSS files reduces the bandwidth used and also improves the performance of the application, since page load time is reduced.

As per Wikipedia:

Minified source code is especially useful for interpreted languages deployed and transmitted on the Internet (such as JavaScript/StyleSheet), because it reduces the amount of data that needs to be transferred. Minified source code may also be used as a kind of obfuscation. In Perl culture, aiming at extremely minified source code is the purpose of the Perl golf game.

Minified source code is also very useful for HTML/CSS code. As an example, successive whitespace characters in HTML/CSS are rendered as a single space, so replacing all whitespace sequences with single spaces can considerably reduce the size of a page.

If you are using Maven for your build process, you can use the Minify Maven Plugin, which is a wrapper over the YUI Compressor and Google Closure Compiler but adds a layer of abstraction around these tools that allows other tools to be added in the future.


<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-war-plugin</artifactId>
   <version>2.4</version>
   <configuration>
      <warSourceExcludes>/styles/*.css,/scripts/*.js</warSourceExcludes>
   </configuration>
</plugin>

<plugin>
   <groupId>com.samaxes.maven</groupId>
   <artifactId>minify-maven-plugin</artifactId>
   <version>1.7.2</version>
   <executions>
      <execution>
         <id>minify</id>
         <phase>process-resources</phase>
         <goals>
            <goal>minify</goal>
         </goals>
         <configuration>
            <charset>utf-8</charset>
            <jsEngine>CLOSURE</jsEngine>
            <skipMerge>true</skipMerge>
            <nosuffix>true</nosuffix>
            <cssSourceDir>styles</cssSourceDir>
            <cssTargetDir>styles/minified</cssTargetDir>
            <cssSourceIncludes>
               <cssSourceInclude>*.css</cssSourceInclude>
            </cssSourceIncludes>
            <jsSourceDir>scripts</jsSourceDir>
            <jsTargetDir>scripts/minified</jsTargetDir>
            <jsSourceIncludes>
               <jsSourceInclude>*.js</jsSourceInclude>
            </jsSourceIncludes>
         </configuration>
      </execution>
   </executions>
</plugin>

I created a sample web application with a “scripts” folder to store JavaScript files and a “styles” folder to store stylesheet files, and used the above configuration to test the minification process.

I copied the latest versions of certain JavaScript and StyleSheet files, which weighed as noted below:

scripts/category-app.js –> 636 B

scripts/jquery.js –> 273199 B

styles/style-foundation.css –> 44659 B

After the Maven build, the minified versions of the JavaScript and StyleSheet files weighed as noted below:

scripts/category-app.js –> 256 B

scripts/jquery.js –> 94689 B

styles/style-foundation.css –> 37016 B

The above results show that the minification process works as expected. So if you are using a lot of JavaScript/StyleSheet files in your web application, consider minifying your JS/CSS code to reduce the data transferred and improve the performance of your web application. (In some cases it may only be a micro-optimization.)

I have added the configuration <skipMerge>true</skipMerge>, hence all files are minified but kept under their own file names within the respective minified folders. If you set the value to <skipMerge>false</skipMerge>, it merges all your JavaScript files into one file and all your StyleSheet files into another.

With <nosuffix>true</nosuffix>, all file names remain the same as the originals. If you set the value to <nosuffix>false</nosuffix>, it will suffix all your JavaScript/StyleSheet file names with ‘.min’.

For more detailed information, please visit here.

By : Nataraj Srikantaiah