Friday, May 25, 2018

Why, when and how to do Performance Testing?

Performance testing is the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. It is a general term that covers various subsets, such as load, stress, and capacity testing, each of which delivers the values and benefits listed below.
Why do Performance Testing?
Performance testing is done to provide stakeholders with information about their application's speed, stability, and scalability. More importantly, performance testing uncovers what needs to be improved before the product goes to market. Without performance testing, software is likely to suffer from issues such as running slowly when several users use it simultaneously, inconsistencies across different operating systems, and poor usability. Performance testing determines whether or not the software meets speed, scalability, and stability requirements under expected workloads.
Types of Performance Testing:

Performance Test:
A performance test is used to determine speed, scalability, and/or stability.
Benefits of Performance Test:
  1. Determines the speed, scalability, and stability characteristics of an application, thereby providing input for sound business decisions.
  2. Identifies mismatches between performance-related expectations and reality.
  3. Supports tuning, capacity planning, and optimization efforts.
Load Test: A load test verifies the application's behaviour under normal and peak load.
Benefits of Load Test:
  1. Determines the adequacy of a hardware environment.
  2. Detects functionality errors under load.
  3. Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.
Stress Test: A stress test validates an application's behavior when it is pushed beyond normal or peak load conditions.
Benefits of Stress Test:
  1. Determines if data can be corrupted by overstressing the system.
  2. Ensures that security vulnerabilities are not opened up by stressful conditions.
  3. Helps to determine what kinds of failures are most valuable to plan for.
Capacity Test: A capacity test determines how many users and/or transactions a given system can support while still meeting performance goals.
Benefits of Capacity Test:
  1. Provides information about how workload can be handled to meet business requirements.
  2. Determines the current usage and capacity of the existing system to aid in capacity planning.
  3. Provides the usage and capacity trends of the existing system to aid in capacity planning.
While doing performance testing, we measure some of the following:

Characteristic (SLA)                    Measurement (unit)
Response Time                           Seconds
Hits per Second                         #Hits
Throughput                              Bytes per second
Transactions per Second (TPS)           #Transactions of a specific business process
Total TPS (TTPS)                        Total no. of transactions
Connections per Second (CPS)            #Connections/sec
Pages Downloaded per Second (PDPS)      #Pages/sec
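
As a rough illustration of how two of these metrics fall out of raw timing data, here is a minimal TypeScript sketch (the sample shape and helper names are hypothetical, not from any particular tool):

  interface Sample { startMs: number; durationMs: number; }

  // Average response time in seconds across all samples.
  function responseTimeSeconds(samples: Sample[]): number {
    const totalMs = samples.reduce((sum, s) => sum + s.durationMs, 0);
    return totalMs / samples.length / 1000;
  }

  // TPS = completed transactions divided by elapsed wall-clock seconds.
  function transactionsPerSecond(samples: Sample[]): number {
    const first = Math.min(...samples.map(s => s.startMs));
    const last = Math.max(...samples.map(s => s.startMs + s.durationMs));
    return samples.length / ((last - first) / 1000);
  }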

Performance Testing Example:
Banking Web Application
A large bank provides an online banking application to its customers to deliver better service and minimize the need to visit a branch. The application offers rich banking features, and customers connect to it from any browser (Internet Explorer, Chrome, Firefox, Opera, or any other).
Architecture:
The Bank IT Infrastructure consists of 4 Web Servers, 1 Application Server & 1 Database Server.
The banking web application currently supports 450 concurrent users at peak usage time, including internal users who perform administration tasks. As per the requirement, the performance target was to have the application available 24 hours a day and capable of handling 1,500 concurrent users.
The goal of performance testing the application was to validate that it scales to 1,500 concurrent users. The scenario was designed so that all users become active after a 30-minute ramp-up. Test execution started with the ramp-up of users across all business transactions, with monitoring already running on the servers to gather performance metrics.

Business Process                                                     Users
Check Balance                                                          300
Mini Account Statement                                                 600
Custom Account Statement (date range not greater than 6 months)         90
View Standing Instructions                                              80
Make Bill Payment                                                       40
Enabling High Security Features                                         10
Transfer Money Between Accounts                                        350
Update Personal Information                                             30
Total                                                                1,500

“Think time” and “pacing” are applied in every script to ensure that transaction throughput remains within limits during performance test execution.
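A minimal sketch of what such a script can look like, assuming a plain fetch-based virtual user (the URL, endpoints, and timings are placeholders, not the bank's actual values):

  const BASE_URL = "https://bank.example.com"; // hypothetical endpoint

  const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

  async function virtualUser(iterations: number): Promise<void> {
    for (let i = 0; i < iterations; i++) {
      const iterationStart = Date.now();

      await fetch(`${BASE_URL}/balance`);        // business transaction
      await sleep(3000);                         // think time: user reads the page
      await fetch(`${BASE_URL}/mini-statement`); // next transaction

      // Pacing: hold each iteration to a fixed cadence (one per 15 s here)
      // so overall transaction throughput stays within the agreed limit.
      const elapsed = Date.now() - iterationStart;
      await sleep(Math.max(0, 15_000 - elapsed));
    }
  }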
http://www.anarsolutions.com/performance-testing/?utm_source=blogger.com

Wednesday, May 23, 2018

Mesh Apps and Service Architecture

Mesh Apps and Service Architecture (MASA) refers to the design of solutions that link mobile apps, web apps, desktop apps, and IoT apps, together with their data (whether user or operational), into a broad mesh of back-end services to create what users view as “the application.”
What is a Mesh Device?
The device mesh refers to an expanding set of endpoints that people use to access applications and information, and to interact with other people, social communities, governments, and businesses. It includes mobile devices, wearables, and consumer and home electronic devices; in simpler terms, the mix of devices you use on your network. These mesh devices are often referred to as sensors in the Internet of Things (IoT), where connectivity is expected to extend well beyond traditional mobile devices into a ‘post-mobile’ world.
What is Mesh App and Service Architecture?
Each mesh device runs several mesh apps backed by a service architecture. The service architecture provides back-end cloud scalability while the device mesh provides the front-end experience. Devices, apps, and services are flexible and compatible, suited to rapid enhancement as users' requirements evolve. Software-defined application services enable web-scale performance, flexibility, and quickness.
MASA also demands a supportive IT architecture. The traditional linear application design, the three-tier architecture, is rigid and dated and offers little assurance for the future. Mesh applications still provide presentation, processing, and data facilities.
What is a Micro-Service?
Mesh apps use containers for rapid, developer-friendly delivery and a micro-service architecture for assembling distributed applications. The micro-service architecture is popular because it decomposes an application into small, independently deployable services, which makes building, distributing, and scaling applications practical both on-site and in the cloud. Containers have recently become the most efficient way to build, develop, and deploy these services, as the sketch below suggests.
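A rough sketch of one such small, independently deployable service, using only Node's built-in http module (the service name and port are hypothetical):

  import { createServer } from "http";

  // A minimal micro-service: one endpoint, one capability, so it can be
  // containerized, deployed, and scaled independently of other services.
  const server = createServer((req, res) => {
    if (req.url === "/health") {
      // Health endpoint so an orchestrator can probe the container.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ status: "ok" }));
      return;
    }
    res.writeHead(404);
    res.end();
  });

  server.listen(8080, () => console.log("balance-service listening on :8080"));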
Work of Application and Development Team:
Application teams are being pushed toward new, modern architectures. Cloud-based architectures are fast, flexible, and dynamic: applications that run in the cloud give users a responsive, adaptive experience. These are the transparent, user-friendly mesh applications written for the device mesh.
Savvy innovators are working on swift, resilient, and scalable software applications, all built to support web-scale performance both on-site and in the cloud. Software developers know they still require client-server architecture; the challenge is the rapidly changing demand of the digital mesh market. And there is one more point developers have to consider.
All applications, processes, and services have to work seamlessly across architectures, whether all-cloud, on-premises, or web-scale. Successful and competent mobile software has to be delivered too. Finally, applications should be compatible across the mesh's many sensors.
http://www.anarsolutions.com/mesh-apps-and-service-architecture/?utm_source=Blogger.com

Monday, May 21, 2018

Package Manager or Package Management System

A package manager is software that automates the process of maintaining software applications on a system. It is a collection of software tools that tidies up installing, removing, updating, and configuring computer programs on an operating system. The term is most often associated with UNIX and UNIX-derived environments like Linux. Package managers are essentially used to remove manual intervention from software maintenance. They work closely with app stores and with software and binary repositories, and they maintain a database of all installed packages, their dependencies on other packages, version information, and metadata, to avoid disparities and missing requirements.
Functions of a Package Manager
A package is a file composed of one or more computer files along with the metadata required for its deployment; the program it carries contains code that needs to be compiled and built. Each package's metadata contains its description, its dependencies, and its version. Package managers are assigned the following jobs:
  • Extracting package archives
  • Performing a thorough check on the authenticity and the security of the package by performing checks for digital certificates
  • Enabling and disabling operating system features
  • Install or remove several packages with a single string or command
  • Install hotfixes provided for the software
  • Add out of the box drivers to the system driver store
  • Grouping software based on parameters like function, usage, etc.
  • Downloading latest versions and installing them
Maintenance of Configurations
Package managers originated from file archiving systems, primarily on UNIX, so they can only either overwrite or retain configuration files rather than making selection-based changes; a set of rules cannot be applied to configurations while software is updated. Kernel configurations, however, are exempt from this rule, since any change to them can break the system. A change to the format of a configuration file can also cause issues.
The issue is that an old configuration file cannot necessarily reproduce its behavior in the new format; for instance, options that must now be disabled cannot be expressed by the old file. A few packages allow configuration during installation; otherwise, the software can be installed with its default configuration and changed once installed.
Common Formats:
  • Universal Package Manager
  • Free and Open Source Package Managers
  • Application Level Package Managers
Commands in use:
Most package managers offer similar functions and features, which makes their commands largely translatable between systems. The common functions are installing, removing, and updating packages, listing upgradable software, and deleting configurations or dependencies, as the examples below illustrate.
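For instance, the everyday operations map across common managers roughly as follows (a sketch; exact flags and subcommands vary by distribution and version):

  Debian/Ubuntu (APT)        Fedora (DNF)           Arch Linux (pacman)
  apt-get update             dnf check-update       pacman -Sy
  apt-get install <pkg>      dnf install <pkg>      pacman -S <pkg>
  apt-get remove <pkg>       dnf remove <pkg>       pacman -R <pkg>
  apt-get upgrade            dnf upgrade            pacman -Su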
http://www.anarsolutions.com/package-manager-or-package-management-system/?utm_sorce=Blogger.com

Wednesday, May 16, 2018

Introduction to Serverless Computing

Serverless computing, the next big deal in cloud computing, refers to applications that depend significantly on third-party services, known as Backend as a Service (BaaS), or on custom code run in managed, ephemeral environments, known as Function as a Service (FaaS). It is a code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy each request rather than per virtual machine, per hour.
In serverless computing the server is hidden from developers: they do not have to operate it, and they do not have to think about scalability, high availability, infrastructure security, and so on. The time not spent on those concerns can be put into other professional work.
Serverless computing suits micro-service-oriented solutions: complex build, development, and deployment are simplified, and deployment can be on premises or in the cloud.
Few Insights on Serverless Computing:
  • Serverless computing is not any specific technology; it builds on concepts that existed earlier. New solutions have materialized, such as Apache OpenWhisk, that are helpful to developers.
  • Serverless architectures depend on third-party services (Backend as a Service, BaaS) and on custom code run in ephemeral containers (Function as a Service, FaaS). The best-known FaaS host is AWS Lambda.
  • ‘Serverless’ originally described applications that moved server-side logic to third-party cloud services. Generally these are client applications, single-page web apps or mobile apps, backed by cloud databases such as Parse and Firebase and by authentication services such as Auth0 and AWS Cognito. These services are often described as Mobile Backend as a Service.
  • Serverless computing still lets application developers program, but the process is different. There are no traditional architectures; serverless uses stateless compute containers that are event-triggered, transient, and completely managed by a third party.
Consider UI-driven applications. An e-commerce app is a classic three-tier, client-oriented system with server-side logic: traditionally the architecture would be JavaScript implemented on the server side with HTML plus JavaScript at the client end, and the client would know little about authentication, page navigation, searching, or transactions. With a serverless architecture the program is simplified: much of that logic migrates into the client, and the structure follows the UX rather than requiring a wholesale architectural migration.
An API Gateway mediates between clients and server-side functions. In message-driven designs, a message broker and the FaaS environment are closely tied to each other; for example, an ad server can respond to client interactions while the broker feeds events to functions, and the FaaS platform may run multiple copies of the function code in parallel to process them.
FaaS means running back-end code without managing your own server systems or long-lived server applications, which distinguishes it from containers and PaaS (Platform as a Service). FaaS brings significant architectural restrictions: for any given invocation of a function, none of the in-process or host state it creates, including the contents of RAM and the local disk, is guaranteed to be available to any subsequent invocation. The deployment unit is invisible to you, and FaaS functions must be treated as stateless.
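A minimal sketch of such a stateless function, written as a hypothetical AWS Lambda-style handler in TypeScript (the event shape and names are illustrative):

  // A stateless, event-triggered function: everything it needs arrives in
  // the event, and nothing is kept in RAM or on local disk between runs.
  interface ClickEvent { adId: string; userId: string; }

  export async function handler(event: ClickEvent): Promise<{ statusCode: number }> {
    // Any durable state would go to an external store (e.g., a cloud
    // database), never to local memory or the local disk.
    console.log(`recording click on ad ${event.adId} by user ${event.userId}`);
    return { statusCode: 200 };
  }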
http://www.anarsolutions.com/introduction-to-serverless-computing/?utm_source=Blogger.com

Monday, May 14, 2018

The Rising Cost of Defects

The early detection of defects in a process is important for the successful execution of a project. However, the detection and prevention of defects remain a significant challenge in the software industry.
Although the proverb “better safe than sorry” may seem entirely disconnected from software programming, pay heed: it is the single mantra that can save a ton in maintenance cost on a project. Defects, or bugs as we not so endearingly call them, are the sneakiest of all. Irrespective of severity, the later a defect is found, the more harm it does. Safe to say, the cost of a defect is proportional to the time taken to find it.
A large portion of the cost of software development consists of error removal and rework. Rework costs more than the initial work, so early detection of defects during the requirements and design phases is necessary to avoid this extra expense. A large number of defects usually originate in the initial stages of a project, and early detection lowers the overall cost of the project.
How do defects arise?
When an error is found in the requirements or design phase, it is easy to correct, but if it is found in user acceptance testing it becomes expensive. The reason is that, even in the most trivial cases, root cause analysis and regression testing take a lot of time late in the development cycle. In most cases, a defect found in later stages is not caused by a missing line of code or a typo; it is more subtle and difficult to find.
It is highly unlikely that a defect caused by minor errors will be overlooked by testers who test every scenario in detail. This means more extensive testing and detailed analysis are needed to figure out what went wrong, and if the final product is an integration of multiple platforms or libraries, the analysis is tenfold more difficult. Even once fixed, the defect still requires additional regression and integration testing to confirm the fix.

Most defects require the team to understand what caused them beyond the faulty code. Was it a fault in the process? Does the process need improvement? Was a test case blocked until the last minute, was the test coverage not good enough, or were there last-minute changes to the code? All of this demands a great deal of brainstorming on the probable cause, and that effort translates into additional cost for the defect.
Reasons for Defects:
With the stringent timelines and agile processes followed by companies, the number of defects is on the rise. Robust measures need to be taken: preventive mechanisms need to be in place, all-embracing code review needs to be done, more test cases need to be written, and best practices need to be followed. This will make the system sturdier, and fewer unforeseen bugs will translate into lower maintenance cost.
http://www.anarsolutions.com/rising-cost-defects/?utm_source=Blogger.com

Parallax Effect or Parallax Scrolling

Introduction to Parallax Effect
The parallax effect arises when multiple layers (foreground, middle, and background) move at varying speeds, creating a sense of depth. It has been used in animation since the 1920s and in video games since the 1980s, giving viewers an illusory visual experience. The parallax effect gave rise to parallax scrolling, also known as layered motion, which was first used in web design in 2011. Parallax scrolling allows shifting of focus, opacity, videos, animations, and multi-directional journeys, so visitors to a website with parallax enjoy an enhanced experience.
What is Parallax Scrolling?
Parallax scrolling is a computer graphics and web design technique in which background images move past the camera more slowly than foreground images, creating an illusion of depth in a 2-D scene. Several methods make this possible.
The layer method uses multiple layers, scrolled independently horizontally and vertically and composited over one another, replicating a multiplane camera. In games written this way, each layer's position shifts by a different amount in the same direction, which requires a display system that supports such layers.
The sprite method builds pseudo-layers from sprites, independently moving objects composited over or behind the other layers, where the display system supports it. Games such as Star Force used this technique, and on the Amiga, whose hardware supports sprites, Risky Woods multiplexed sprites to create a full background layer.
The repeating pattern and animation method treats individual tiles of a tiled background as floating objects over a repeated background; animating the tiles' bitmaps can animate the full screen, giving the illusion of a separate hardware layer. Several games used this form of parallax, such as Parallax by Sensible Software.
The raster method exploits the fact that lines of pixels are composed and refreshed in top-to-bottom order with a short delay between lines, known as the horizontal blanking interval. By changing the scroll offset between lines, video games and TV games could provide the illusion of several layers.
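On the modern web, the layer method can be approximated with a scroll handler that moves each layer at a fraction of the page's scroll speed; a minimal TypeScript sketch (the class name and data attribute are illustrative):

  // Layer-method parallax: each layer scrolls at its own fraction of the
  // page speed, so "distant" layers appear to move more slowly.
  const layers = document.querySelectorAll<HTMLElement>(".parallax-layer");

  window.addEventListener("scroll", () => {
    const scrolled = window.scrollY;
    layers.forEach(layer => {
      // data-speed="0.3" means this layer moves at 30% of normal speed.
      const speed = Number(layer.dataset.speed ?? "0.5");
      layer.style.transform = `translateY(${scrolled * speed}px)`;
    });
  });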
Disadvantages
Parallax can slow the browser in a major way, especially when the browser is out of date, so it is important to know your visitors and audience beforehand. Parallax sites are at a performance disadvantage: browser scrolling performance depends on the direction of scrolling and how much fresh content appears, and it works best when scrolling causes minimal changes. A parallax site, by contrast, has huge visual elements spread over the page, forcing the browser to redraw large areas on every scroll.
http://www.anarsolutions.com/parallax-effect-or-parallax-scrolling/?utm_source=Blogger.com

Wednesday, May 9, 2018

The Impact of Code Quality on Test Coverage Planning

Test coverage measures the amount of testing performed by a set of tests. Wherever we can count things and can tell whether or not each of those things has been exercised by some test, we can measure coverage; that measure is known as test coverage.
During the course of developing software, testing the software for its functionality and efficiency is a crucial step. Many developers contemplate the question in this article's title. The impact of code quality on test coverage planning is debated, and it often comes down to the answer to “How much testing is enough?” Software testing, a critical step in the development cycle, is part of the bigger picture known as quality control. Without testing in QC, there would be no proper validation of the software, and the work put into software project planning and execution could be deemed useless if the testing is improper.
To determine code quality, a few tests have to be performed to measure the application's performance. These tests examine performance along the different alternative paths that could be taken to deliver variants based on resources. Several tests are available that assess code on certain parameters, but before deciding which type of test to use, there are a few concepts to understand.
Test Coverage
Expressed as a percentage, test coverage is one of the parameters considered in testing. It serves as a metric for the functionality of the code: it measures the number of coverage items exercised by a test suite and correlates directly with software functionality. There are various aspects to measuring test coverage; you can test for branches, statements, or paths in a program.
Rework Factor
The rework factor is a direct measure of code quality. It primarily represents the fraction of tested items reported as defective (lines of code that will require rework) in the current test cycle. Combined with historical data, the current rework factor can be used to generate estimates for future cycles. Its interpretation is inversely proportional: an increase in the rework factor indicates lower code quality, as more defects need rework.
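As a rough illustration of these two metrics, a short sketch that reads the definitions above literally (not an industry-standard calculation):

  // Test coverage: fraction of countable items exercised by the test suite.
  function coveragePercent(coveredItems: number, totalItems: number): number {
    return (coveredItems / totalItems) * 100;
  }

  // Rework factor: fraction of tested items reported defective this cycle.
  function reworkFactor(defectiveItems: number, testedItems: number): number {
    return defectiveItems / testedItems;
  }

  // e.g., 420 of 500 branches covered gives 84% coverage; 18 defects in
  // 300 tested items gives a rework factor of 0.06.
  console.log(coveragePercent(420, 500), reworkFactor(18, 300));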
Expected Testing and Expected Defects
The total number of tests that can be planned and executed in a given test cycle is referred to as expected testing; these tests serve the test coverage goals for a given module. Expected defects, on the other hand, are an estimate of the number of defects that will need rework in the next test cycle.
http://www.anarsolutions.com/impact-code-quality-test-coverage-planning/?utm_source=Blogger.com

Monday, May 7, 2018

HIPAA Changes 2017 – Omnibus Rule

Recent updates to the pre-existing HIPAA, the abbreviation of the Health Insurance Portability and Accountability Act, were announced earlier this year. The Act has been part of the IT sector's history ever since its Security Rule's release in 2003, which primarily targeted health care professionals and the industry. The Act aimed to bring about a paradigm shift in the security of health information, patient confidentiality, information integrity, and the protection of all electronic protected health information (ePHI). The HIPAA Omnibus Rule, debated this year, opened new doors for HIPAA compliance initiatives. In this article, we look at the changes made to the Act and their impact on IT security and health care professionals.
Updated HIPAA Definitions for ePHI:
The protection of electronic health information is one of the foremost objectives of the HIPAA Act. The 2017 changes to HIPAA updated the definitions of common terms under ePHI.
According to the new definition, encryption is an approach that uses an algorithmic process to transform data into an alternative form that can only be accessed using a confidential decryption key, leaving a low chance of data compromise.
Accessing data/information is the process of reading, writing, modifying, overwriting, and communicating data from one party to another.
According to the act, a technical safeguard is a collective term for all kinds of technology, various policies, and protocol in place to ensure that there is controlled access to electronic protected health information.
A typical workstation could mean anything along the lines of a laptop, a personal computer, or any other electronic device that can perform a set of core functions and is able to store and analyze data from a particular environment.
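To ground the encryption definition above, here is a minimal sketch of such an algorithmic transformation using Node's built-in crypto module (an illustration only, not a statement of what HIPAA mandates):

  import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

  // AES-256-GCM: without the confidential key, the ciphertext is unreadable.
  const key = randomBytes(32); // the confidential decryption key
  const iv = randomBytes(12);  // unique per message

  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update("patient record", "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();

  // Only a holder of the key can transform the data back.
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
  console.log(plaintext.toString("utf8")); // "patient record"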
Data breaches and the HIPAA Omnibus Rule
Prior to the establishment of the HIPAA Omnibus Rule, the two communities involved, healthcare and IT security, debated the regulations and premises that would constitute a data breach, so that both would understand when and how to report one. While privacy advocates argued that any disclosure of private patient data could mean a breach, others argued for a firmer premise for claiming one: there should be a significant loss to the individual, financial or otherwise.
To conclude
The new Omnibus Rule affects IT professionals in different ways. Even with the problems related to the business side and to maintaining and sharing patients' electronic records, the security implications of the HIPAA changes are manageable rather than detrimental. From a security point of view, the updated ePHI policies on data breach classification are more structured and benefit both health care and IT security professionals.
References from : http://searchitchannel.techtarget
http://www.anarsolutions.com/hipaa-changes-2017/?utm_source=Blogger.com

Friday, May 4, 2018

Angular2 with ES6 : framework of choice

Angular2 has been the framework of choice for many application and web developers. The latest on offer is Angular2 with ES6 JavaScript, an awaited development after Angular2's earlier associations with ES5 JavaScript, TypeScript, and Dart. Angular2, announced in 2014, is the latest iteration of the popular MV* framework developed by Google for building complicated web applications in the browser, and it also gives developers the liberty to go beyond the scope of minimum requirements. Various versions of the MV* pattern are available to developers. The most popular is MVC (Model-View-Controller), followed by MVP (Model-View-Presenter), MVVM (Model-View-ViewModel), and MVCVM (Model-View-Controller-ViewModel). Angular2 has most often been associated with MVC-type frameworks.
All about Angular2
Angular2 is all you need to create a strong, sturdy, and reliable front end for your web or mobile application. Though building one is often regarded as a complicated task, Angular2 offers powerful tools that make the entire process easy and hassle-free: templates, HTTP services, faster rendering options, form creation and handling services, and other important tools that help users land their ideal web or mobile application.
TypeScript
Angular has been associated with JavaScript for a long time; the long list features ES5, ES6, the soon-to-come ES7, Dart, Babel, TypeScript, and AtScript, to name a few. Writing modern JavaScript, however, has been a concern for newcomers who lack experience with its latest versions. ES5 continues to be the most widely supported version of JavaScript, but as ES6 gains universal acceptance, including compatibility with Angular2, it will soon be the standard. Most browsers currently support ES5, and the latest browser releases are adding ES6 support. This works because JavaScript serves as the assembly language of the browser: all forms of code are transpiled down to JavaScript the browser is compatible with. Case in point, CoffeeScript, a popular higher-level language and a favorite among developers: code written in CoffeeScript is compiled into plain JavaScript for the browser to understand and process.
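For flavor, a minimal Angular2 component written in TypeScript (the selector and template are illustrative; a real app also needs an NgModule and a bootstrap step):

  import { Component } from "@angular/core";

  // A small Angular2 component: a class plus metadata binding a template
  // to the component's state.
  @Component({
    selector: "app-greeting",
    template: `<h1>Hello, {{ name }}!</h1>`
  })
  export class GreetingComponent {
    name = "Angular 2";
  }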
To conclude
Angular2 with ES6 is a new take on the existing framework: it embraces a newer JavaScript and smooths the translation from high-level languages for browser and mobile applications.
http://www.anarsolutions.com/angular2es6/?utm_Source=Blogger.com

Wednesday, May 2, 2018

How Does Client-Side Scripting Improve Web Application Performance?

If your web application is not performing well, whether because of consistent lagging or longer loading times, you may want to consider shifting work to client-side scripting. Using a language such as JavaScript, you can add a client-side script to your web page that the client interacts with through a web browser. This short piece of code enables the browser to reduce the load on your server: the script takes on part of the page's work, allowing the user to run the web application smoothly. The code runs entirely on the client's side and is particularly useful for creating better-responding applications.
Understanding web applications
Before diving into the details of client-side scripting, it is imperative to discuss the foundation of web applications. Web applications differ from the regular applications that run on desktops. Essentially, a desktop application communicates with a database server to import and export local data, and all scripts involved in the code run from the computer's local storage. A web application, however, does not have to be installed in the client's local storage: an active internet connection through a web browser is enough to run all kinds of web applications. Unlike desktop applications, web applications communicate with servers through the browser. For web applications the client side is traditionally not very active; the real load is on the web server at the other end of the chain, where the bulk of data transfer occurs.
The Need for Client-Side Scripting
As with most pieces of technology, continuous upgrades have made web applications more advanced and versatile. They have come a long way, from basic scripts to advanced, more sophisticated, and more powerful applications that now handle data and operate on the same level as small desktop applications. That data is handled by web servers solely responsible for accepting and responding to incoming HTTP requests; when these servers are under immense load from higher-end web applications, user-initiated HTTP requests respond more slowly.
Benefits of enabling client-side scripting
1. Faster responses to all user-initiated HTTP requests, since client-side scripting eases the workload on the web application servers.
2. The application web server handles requests better and is subject to fewer server issues.
3. Ajax and jQuery make it easier to call web services.
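A minimal sketch of the idea, validating a form in the browser so that obviously bad submissions never reach the server (element IDs are hypothetical, and a real application must still re-validate on the server):

  // Client-side validation: reject bad input in the browser, sparing the
  // server an HTTP round trip for requests that would fail anyway.
  const form = document.querySelector<HTMLFormElement>("#signup-form")!;
  const email = document.querySelector<HTMLInputElement>("#email")!;

  form.addEventListener("submit", (event) => {
    if (!email.value.includes("@")) {
      event.preventDefault(); // stop the request before it is sent
      alert("Please enter a valid email address.");
    }
    // Otherwise the form submits normally; the server still re-validates.
  });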
http://www.anarsolutions.com/client-side-scripting/?utm_Source=Blogger.com