Performance Testing in Agile World!

In this hyper-connected era, everybody wants things done quickly and perfectly. Agile methodology came into existence to solve some of the time-bound problems of the software development life cycle, but there are many challenges when it comes to performance testing in the Agile world. In this article I discuss my research on fitting performance testing into the Agile world, and brief the challenges and a process we can follow to get good, fast results.

Goal:
The goal is to test the performance of an application or feature as soon as it is built and ready for functional testing, so that functional and performance testing can be conducted in the same sprint. The benefit of testing early is tremendous, as the cost of fixing performance bottlenecks is lower.

Performance Testing in Agile
The diagram below depicts a performance testing methodology for Agile (Scrum is considered), which is a summary of my research on the topic. We can divide the whole performance testing effort into three stages, S1, S2 and S3; these stages map to the stages of Agile Scrum.

Stage 1. Unit Performance Testing

This stage can start as soon as development activity in the sprint starts. It is referred to as Unit Performance Testing and concentrates on method- or code-level testing.

Understand the technology and feature being developed (do not disturb)
Recommend best performance practices for the technology, web/app server container, or application configuration (but do not insist)
Prepare unit performance test stories for the required features
Once coding is completed, measure method-level response time and resource usage
Method-level performance analysis can be conducted using profilers (see the sketch after this list)
Use the test or dev environment for performance testing, since high-load simulation is not required at this stage
Try open-source tools/profilers for performance testing (reducing cost)
Measure and baseline the results, and always ask for feedback
Metrics:
Code Profile
Response time (end user and method level)
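
One lightweight way to do the method-level measurement described above is a microbenchmark. Below is a minimal sketch using JMH (the open-source Java Microbenchmark Harness); the benchmarked method is a stand-in for whatever the sprint delivers, and the input sizes are illustrative assumptions, not values from the article.

import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
public class UnitPerfBenchmark {

    @Param({"1000", "100000"})   // illustrative input sizes to baseline against
    public int size;

    @Benchmark
    public long methodUnderTest() {
        // stand-in for the sprint's method whose response time we baseline
        return IntStream.range(0, size).mapToLong(i -> (long) i * i).sum();
    }
}

Built through the JMH Maven archetype, this yields averaged per-method timings that become the baseline later sprints are compared against.
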
Stage 2. Focused/Component Performance Testing
Stage 2 is referred to as Focused or Component Performance Testing. This is not a typical performance testing category: load testing is concentrated on a specific component (feature or functionality). Instead of waiting for a complete code freeze of the application, we test the feature that is already built and design a load model around it.

Apply load to a specific feature or functionality (see the load-test sketch after this list)
Run repeatedly after each sprint
Measure the response time of the new feature under the designed normal load
Can be conducted in the functional test environment (low-end configuration)
Do not waste time enhancing scripts; concentrate on how to load the functionality/feature
Metrics:
Response time of the feature
System resource usage under load
Throughput and other performance metrics under load
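
For teams without a load-testing tool in the functional environment, even a small self-contained program can apply this kind of focused load. The sketch below (plain JDK, Java 11+) drives one feature endpoint from a pool of concurrent users and reports percentile response times; the URL, user count, and iteration count are illustrative assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FeatureLoadTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://test-env.example.com/app/new-feature")).GET().build();

        int users = 25, iterationsPerUser = 40;   // illustrative "normal load"
        List<Long> latenciesMs = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(users);

        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int i = 0; i < iterationsPerUser; i++) {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        latenciesMs.add((System.nanoTime() - start) / 1_000_000);
                    } catch (Exception e) {
                        // a real test would count this as an error sample
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        latenciesMs.sort(null);   // single-threaded from here on
        System.out.printf("samples=%d p50=%dms p95=%dms%n", latenciesMs.size(),
                latenciesMs.get(latenciesMs.size() / 2),
                latenciesMs.get((int) (latenciesMs.size() * 0.95)));
    }
}

Re-running this after each sprint gives a cheap trend line for the feature's response time, which is exactly the metric this stage tracks.
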
Stage 3. Performance Integration Testing
This stage is a regular performance testing activity: planning load and stress tests for the whole application at the system-configuration level. Stage 3 can be planned during the hardening sprint (or once 3 to 4 development sprints are completed). Performance testing here focuses on the whole application.

Prepare a workload model for the entire system
Set up performance monitoring at each application tier
Design and execute load, stress, and endurance tests to discover bottlenecks at the system level
Metrics:
System stability, capability and responsiveness
Advantages:
Involving the performance team during the sprint helps build not just an application or a feature, but a well-performing application
Early bottleneck detection reduces the cost of fixing performance issues
Working on performance fixes in the hardening sprint has a great effect on system stability and capacity
“The bitterness of poor quality remains long after the sweetness of low price is forgotten.”— Benjamin Franklin

By: Irshad Ahmad Pallamajal

Minification of JS/CSS

Minification is the process of compressing source code without changing its functionality. This technique is most useful for web applications with huge JavaScript/CSS files; in other cases it is probably just a micro-optimization. Minifying JavaScript/CSS files reduces the bandwidth used and also improves the application's performance, since page-load time is minimized.

As per Wikipedia Source:

Minified source code is especially useful for interpreted languages deployed and transmitted on the Internet (such as JavaScript/StyleSheet), because it reduces the amount of data that needs to be transferred. Minified source code may also be used as a kind of obfuscation. In Perl culture, aiming at extremely minified source code is the purpose of the Perl golf game.

Minified source code is also very useful for HTML/CSS code. As an example, successive whitespace characters in HTML/CSS are rendered as a single space, so replacing all whitespace sequences with single spaces can considerably reduce the size of a page.
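
As a toy illustration of that whitespace rule, the snippet below collapses runs of whitespace into single spaces and prints the size saving. A real minifier is far more careful (it must, for example, leave <pre> content alone); the sample markup is an assumption for demonstration.

public class HtmlWhitespaceDemo {
    public static void main(String[] args) {
        String html = "<ul>\n    <li>Home</li>\n    <li>About</li>\n</ul>";
        // successive whitespace renders as a single space, so collapse it
        String minified = html.replaceAll("\\s+", " ");
        System.out.printf("before=%d chars, after=%d chars%n",
                html.length(), minified.length());
    }
}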

If you are using Maven for your build process, you can use the Minify Maven Plugin, which wraps the YUI Compressor and Google Closure Compiler behind a layer of abstraction that allows other tools to be added in the future.


<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-war-plugin</artifactId>
   <version>2.4</version>
   <configuration>
      <warSourceExcludes>/styles/*.css,/scripts/*.js</warSourceExcludes>
   </configuration>
</plugin>

<plugin>
   <groupId>com.samaxes.maven</groupId>
   <artifactId>minify-maven-plugin</artifactId>
   <version>1.7.2</version>
   <executions>
      <execution>
         <id>minify</id>
         <phase>process-resources</phase>
         <goals>
            <goal>minify</goal>
         </goals>
         <configuration>
            <charset>utf-8</charset>
            <jsEngine>CLOSURE</jsEngine>
            <skipMerge>true</skipMerge>
            <nosuffix>true</nosuffix>
            <cssSourceDir>styles</cssSourceDir>
            <cssTargetDir>styles/minified</cssTargetDir>
            <cssSourceIncludes>
               <cssSourceInclude>*.css</cssSourceInclude>
            </cssSourceIncludes>
            <jsSourceDir>scripts</jsSourceDir>
            <jsTargetDir>scripts/minified</jsTargetDir>
            <jsSourceIncludes>
               <jsSourceInclude>*.js</jsSourceInclude>
            </jsSourceIncludes>
         </configuration>
      </execution>
   </executions>
</plugin>

I created a sample web application with a “scripts” folder for JavaScript files and a “styles” folder for stylesheet files, and used the above configuration to test the minification process.

I copied in the latest versions of a few JavaScript and stylesheet files, whose sizes were as noted below:

scripts/category-app.js –> 636 B

scripts/jquery.js –> 273199 B

styles/style-foundation.css –> 44659 B

After the Maven build, the minified versions of the JavaScript and stylesheet files weighed as noted below:

scripts/category-app.js –> 256 B

scripts/jquery.js –> 94689 B

styles/style-foundation.css –> 37016 B

The above results show that the minification process works as expected. So, if you are using a lot of JavaScript/stylesheet files in your web application, consider minifying your JS/CSS code to reduce the data transferred and improve your application's performance. In some cases, though, it is just a micro-optimization.

I added the configuration <skipMerge>true</skipMerge>, so each file is minified but kept under its original file name within the respective minified folders. If you set <skipMerge>false</skipMerge>, the plugin merges all your JavaScript/stylesheet files into one file.

With <nosuffix>true</nosuffix>, all file names remain the same as the originals. If you set <nosuffix>false</nosuffix>, the plugin suffixes all JavaScript/stylesheet file names with ‘.min’.

For more detailed information, please see the Minify Maven Plugin documentation.

By : Nataraj Srikantaiah

New Era Microservices Architecture

Microservice architecture, or simply microservices, is a distinctive method of developing software systems that has grown in popularity in recent years. In fact, for many developers it has become a preferred way of creating enterprise applications.

Microservices is a software architecture style in which complex applications are composed of small, independent processes that communicate with each other using web services. These services are small, highly decoupled, and focus on doing a small task well. The philosophy of microservices architecture essentially matches the Unix philosophy of “Do one thing and do it well.”

While there is no standard, formal definition of microservices, there are certain characteristics that help us identify the style.  Essentially, microservice architecture is a method of developing software applications as a suite of independently deployable, small, modular services in which each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal.

Thanks to its scalability, this architectural method is considered particularly well suited when you have to support a range of platforms and devices spanning web, mobile, IoT, and wearables, or simply when you are not sure what kinds of devices you will need to support in an increasingly cloud-centric future.

How the services communicate with each other depends on your application's requirements, but many developers use HTTP/REST with JSON or XML as a standard protocol between the microservices.
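
As a minimal sketch of such a service, here is one small task exposed over HTTP/JSON using only the JDK's built-in HttpServer; the service name, endpoint, port, and payload are illustrative assumptions, and a production service would more likely sit on a framework such as Spring Boot or Dropwizard.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class PriceService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // one well-defined endpoint: this service does one thing (quote a price)
        server.createContext("/price", exchange -> {
            byte[] body = "{\"sku\":\"ABC-1\",\"price\":9.99}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        System.out.println("price-service listening on :8080");
    }
}

Because the service owns a single responsibility behind a well-defined interface, it can be developed, deployed, and scaled independently of its consumers.
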

To begin to understand microservices architecture, it helps to consider its opposite: the monolithic architectural style. Unlike microservices, a monolithic application is built as a single, autonomous unit. In a client-server model, the server-side application is a monolith that handles the HTTP requests, executes logic, and retrieves/updates the data in the underlying database. The problem with a monolithic architecture, though, is that all change cycles usually end up tied to one another. A modification to a small section of the application might require building and deploying an entirely new version. If you need to scale specific functions, you may have to scale the entire application rather than just the desired components. This is where microservices come to the rescue.

Examples of Microservices:

Netflix, eBay, Twitter, PayPal, Gilt, Bluemix, Soundcloud, The Guardian, and many other large-scale websites and applications have all evolved from monolithic to microservices architecture.

Pros

  • Microservice architecture gives developers the freedom to independently develop and deploy services.
  • A microservice can be developed by a fairly small team.
  • Code for different services can be written in different languages.
  • Easy for developers to understand and modify, which can help a new team member become productive quickly.
  • When change is required in a certain part of the application, only the related service can be modified and redeployed—no need to modify and redeploy the entire application.
  • Better fault isolation: if one microservice fails, the others continue to work (whereas one problematic area of a monolithic application can jeopardize the entire system).
  • Easy to scale and integrate with third-party services.

Cons

  • Due to distributed deployment, testing can become complicated and tedious.
  • Increasing number of services can result in information barriers.
  • The architecture brings additional complexity, as developers have to handle fault tolerance, network latency, a variety of message formats, and load balancing.
  • Being a distributed system, it can result in duplication of effort.
  • As the number of services increases, integrating them and managing the whole product can become complicated.
  • Handling use cases that span more than one service without using distributed transactions is not only tough but also requires communication and cooperation between different teams.
  • The architecture usually results in increased memory consumption.

HIGH LEVEL MICROSERVICES ARCHITECTURE


By : Nataraj Srikantaiah

Need for Speed: Practices to Improve Software Quality

Software quality is a fascinating yet daunting goal. Everyone from top management to test engineers talks about the quality goals to be achieved, and you will still find defects in your application.

There is a reason bugs are called “bugs”: these pests have been around for a long time, and they always turn up at the most inappropriate times. Achieving quality gets trickier in this brutal business climate, where a scarcity of time and resources, as well as intense cost pressures, have made “the need for speed” a more apt motto for development teams than assuring that “quality is job No. 1.” This does not mean we can afford to dilute quality and accept software with bugs.

We need to understand that quality assurance is not the final part of application delivery; rather, it must be part of the whole SDLC. Below are a few practices development teams should follow. They will help the team improve quality without slowing down the development process.

  1. Develop your own quality standards: Keeping your end business goals/KPIs in view, define your quality standards. Make sure you take into account your time, resources, and budget.
  2. Fine-tune goals to include quality: Once your high-level goals (at release level or higher) are established, start translating them down to teams and individuals. This has two distinct advantages: (a) you can closely monitor progress towards the goals by distributing them among teams, with the respective team leaders responsible for tracking them; (b) it gives your team members clear goals to achieve and motivates them, since they can relate their work to the business goals.
  3. Establish requirements right: Requirements are the foundation stones of any project's success. Requirements should aim at a satisfying user experience. The benefit is less rework, less retesting, and a reduction in overall effort.
  4. Test what is required, not everything: Once your requirements and goals/KPIs are set, start identifying the crucial and riskiest areas, and make sure testing for those areas gets the lion's share, so that the bugs that slip through are in less important areas. Also, discuss impact areas with the development team based on code changes.
  5. Define simple quality metrics: Start simple; it could be defects per module, open-defect distribution, etc. Once a few basic metrics are ready, discuss with business people which areas they would like to see improved and measured. You can also propose a few ideas of your own.
  6. Optimize the use of testing tools: Within your budget and other constraints, include automation and issue-tracking tools to help the testing team.

Automation will free resources from repetitive testing effort, and tracking issues with any standard issue-tracking tool will provide important data points for making decisions.

Remember: software quality is a team exercise, and everyone has to do it.

By : Manish Holey

Connected world, Cloud and Analytics

When we talk about connected things, a lot of development is going on across all industry segments, and we witnessed quite a few product-launch announcements in this area last year. Still, I feel there are many challenges to implementation, including remote connectivity, device management, network protocol standards, energy consumption, privacy/security, and others. Maybe this is why we are not yet seeing large numbers of connected devices in our day-to-day lives. Though talk of IoT has been around for more than a few years now, that is not the case for industry usage of IoT: industry is investing heavily in it, and many implementations are already in production, supporting real-time operations and optimizing cost and resource utilization. Please check out this video for further details on how Microsoft Azure IoT helps industry.

The evolution of the public cloud will help boost connected devices and their applications. It will not solve the basic problem of Internet availability to things, but it will definitely solve the problem of connectivity and help process data easily. End-to-end solutions for IoT applications on Amazon Web Services (AWS) were being implemented before the launch of the AWS IoT service. Here, the architectural differences before and after the AWS IoT launch are discussed, to provide more insight into how to leverage this new service for applications in the data mining and analytics field.

Before AWS IoT service:

Below is the architecture in which sensor nodes connect to AWS Kinesis and send sensor data.

[Figure: sensor nodes sending data directly to an AWS Kinesis stream]

After this we have multiple options for reading data from the AWS Kinesis stream. We can use Apache Storm for real-time streaming analytics; a sample Kinesis Storm spout is available here. To display real-time data on a dashboard, Kibana was used: Elasticsearch reads the Kinesis stream, and the processed data is consumed by Kibana. As AWS keeps updating its services with new features, it now provides an Amazon Elasticsearch Service out of the box; for more detail, please check out this blog by Jeff Barr.

We can also use Amazon Elastic MapReduce (EMR) to process the Kinesis stream with MapReduce tasks. Storing the data in DynamoDB or other services is also possible.
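
As a sketch of this “before” picture, here is how a sensor node might push a reading straight into the Kinesis stream using the AWS SDK for Java (v1); the stream name, partition key, and payload are illustrative assumptions. Note that the node itself must carry AWS credentials, which is exactly the drawback addressed next.

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordRequest;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SensorPublisher {
    public static void main(String[] args) {
        // credentials/region come from the default provider chain,
        // i.e. they must be present on the device itself
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();
        String reading = "{\"sensorId\":\"node-17\",\"tempC\":23.4}";
        PutRecordRequest request = new PutRecordRequest()
                .withStreamName("sensor-stream")
                .withPartitionKey("node-17")   // keeps one node's data ordered
                .withData(ByteBuffer.wrap(reading.getBytes(StandardCharsets.UTF_8)));
        kinesis.putRecord(request);
    }
}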

After AWS IoT service:

The first part of the above architecture has several shortcomings. For example, we have to manage all the things connected to the network manually. Also, to send data to a specific AWS service (Kinesis in our case), AWS API keys with a specific role need to be present inside the thing/device. AWS IoT provides an excellent solution for all of this: we can manage things/devices with all the features of AWS IAM, including certificate provisioning for things/devices, and we can revoke the certificate associated with any node at any time.

Below is the architecture after using AWS IoT service:

[Figure: the architecture after AWS IoT: devices connect through AWS IoT, whose Rules Engine routes messages on to Kinesis and downstream analytics]

With the Rules Engine of AWS IoT we can route messages to different AWS services. It also provides much-needed support for the MQTT protocol. Notable features of this service include Device Shadows and device SDKs. The remaining part of the architecture, covering data analytics and visualization with Storm, Elasticsearch, and related methods, stays the same. But with AWS IoT we can now also talk back to devices, which enables us to design a wide range of real-time applications.
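
As a sketch of a device publishing telemetry over MQTT to AWS IoT, the snippet below uses the open-source Eclipse Paho client rather than the AWS device SDKs mentioned above; the endpoint, topic, and payload are illustrative assumptions, and in practice the TLS socket factory must be built from the X.509 certificate provisioned for the device.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class ThingPublisher {
    public static void main(String[] args) throws Exception {
        // the client id doubles as the thing name registered in AWS IoT
        MqttClient client = new MqttClient(
                "ssl://example-endpoint.iot.us-east-1.amazonaws.com:8883", "node-17");
        MqttConnectOptions options = new MqttConnectOptions();
        // options.setSocketFactory(...) would install an SSLSocketFactory
        // built from the device certificate provisioned through AWS IoT
        client.connect(options);
        client.publish("sensors/node-17/telemetry",
                new MqttMessage("{\"tempC\":23.4}".getBytes()));
        client.disconnect();
    }
}

From here, an AWS IoT rule can forward messages on this topic to Kinesis, so the analytics pipeline described above is unchanged while credentials stay off the device.
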

The ultimate goal is to use the historic data generated and find patterns in it that will drive key decisions.

Conclusion:

With reduced hardware costs and the availability of excellent cloud services, there is immense opportunity in applications ranging from factory automation, healthcare, logistics and warehouse management, and remote monitoring of devices/things to home automation.

By : Pushparaj Zala

Value of Disruptive Innovation

Digital innovation has disrupted many a market. The pace of change, measured by the mean time between changes, has accelerated exponentially. The internet has brought goods, services, and productivity on a global scale within reach of the world as we know it. The only limiting factors remain regulation and, in the case of physical fulfillment cycles, the ability to fulfill the orders placed.

Examples abound: Amazon disrupting retail markets, Priceline and Expedia disrupting traditional travel agencies, WhatsApp disrupting telecoms, Uber disrupting the taxi business, and Airbnb disrupting the hotel reservations business. These business models are rapid in gestation and instantaneously disruptive at a global level. Digital technology remains the backbone of such disruptions, supported by the ubiquitous spread of the internet.

If you compare the value of disruption with the market-based values of listed entities, the largest tech firms in the US and in the other IT superpower, India, tell a telling story. Apple is a known innovator and disrupter of the highest caliber, while TCS, the largest IT outsourcer, provides services to global corporates. TCS disrupted technology services about two decades ago, and Apple began its integrated device ecosystem story around the same time. Comparing the two entities' revenue at the aggregate level (since market cap carries growth-rate, industry, and other premium valuation factors):

                       TCS            Apple
Revenue                $18 billion    $232 billion
Employees              335,000        66,000
Revenue per employee   $53,731        $3,515,152
Productivity factor (Apple/TCS): ~65x

Can one argue that an Apple employee is 65 times more productive than a TCS employee, or is this more a yardstick of the innovation premium?

[Figure: valuation multiples across innovation types]

The ellipses in the figure above show the valuation modulation across innovation types. Without getting into arguments about frothy and unsubstantiated valuations, we see buyers willing to pay 8x EBITDA for traditional valuations, 28x EBITDA for cloud-hosted digital-platform valuations, and 3x Gross Merchandise Value (GMV) for dominant eCommerce retail valuations.

GMV is a new norm and is a superset of cost of goods sold, since marketplace connection models do not even put the goods on an online retailer's books, yet GMV is considered in valuation schematics.

In short, a $10M EBITDA on a $100M revenue base could value a traditional firm at $80M, a new-age firm at $280M, and an eCommerce firm with a GMV of $1B at $3B.

We will discuss the new normal in return characteristics and the devaluation of the profit focus in the next segment.

By : Deepak Nachnani