Wednesday, February 20, 2019

Automated Unit Testing: MSTest vs XUnit vs NUnit



Automated unit tests can be run as often as you want, on as many kinds of data as you want, and with next to no human involvement once the tests are written. Not only that, but using code to test code will often reveal flaws in your program that would have been very difficult to spot from a programmer's viewpoint. Once you have decided to write unit tests, the next question is which test framework you should use. The main contenders are MSTest, NUnit and the newer kid on the block, xUnit.
Today we will take a look at a few popular C# unit testing frameworks and try them out first-hand so you can choose which one best suits your project.
Popular Automated Unit Testing Frameworks
  • Built-in Visual Studio testing tools
  • MSTest
  • NUnit
  • XUnit
All of these unit testing frameworks share a similar end goal: to make writing unit tests faster, simpler and easier! But there are still a few key differences between them. Some are focused on powerful, complex tests, while others rank simplicity and usability as a higher priority. Let's take a look:

Built-in Visual Studio Testing Tools

  1. Test Explorer. Test Explorer lets you run unit tests and view their results. Test Explorer can use any unit test framework, including a third-party framework, that has an adapter for Test Explorer.
  2. Microsoft unit test framework for managed code. The Microsoft unit test framework for managed code is installed with Visual Studio and provides a framework for testing .NET code.
  3. Microsoft unit test framework for C++. The Microsoft unit test framework for C++ is installed with Visual Studio and provides a framework for testing native code.
  4. Code coverage tools. You can determine the amount of product code that your unit tests exercise from one command in Test Explorer.
  5. Microsoft Fakes isolation framework. The Microsoft Fakes isolation framework can create substitute classes and methods for the production and system code that creates dependencies in the code under test. By implementing the fake delegates for a function, you control the behavior and output of the dependency object, as the sketch below shows.
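For instance, a minimal sketch of the canonical DateTime shim (an assumption for illustration: it requires a Fakes assembly generated for System, which is a Visual Studio Enterprise feature):

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ShimExampleTests
{
    [TestMethod]
    public void ComponentSeesFixedClock()
    {
        using (ShimsContext.Create())
        {
            // Every call to DateTime.Now inside this context is detoured
            // to return a fixed date, isolating the test from the real clock.
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2000, 1, 1);

            Assert.AreEqual(2000, DateTime.Now.Year);
        }
    }
}
```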

MSTest

MSTest has shipped with Visual Studio for many years. When it first came out, it didn't have a way to pass parameters into your unit tests, so a lot of people opted to use NUnit instead. Since V2, MSTest also supports parameterized tests, so the difference between the frameworks on a day-to-day basis has lessened a lot. A very basic test class using MSTest will look like this:
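A minimal sketch (the addition scenario and all names are illustrative):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoNumbers_ReturnsSum()
    {
        Assert.AreEqual(5, 2 + 3);
    }

    // Parameterized test, supported since MSTest V2.
    [DataTestMethod]
    [DataRow(1, 2, 3)]
    [DataRow(-1, 1, 0)]
    public void Add_ManyInputs_ReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}
```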

XUnit

xUnit.net is a free, open source, community-focused unit testing tool for the .NET Framework. Written by the original inventor of NUnit v2, xUnit.net is the latest technology for unit testing C#, F#, VB.NET and other .NET languages. xUnit.net works with ReSharper, CodeRush, TestDriven.NET and Xamarin.
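A very basic test class using xUnit looks like this minimal sketch (again, the scenario is illustrative):

```csharp
using Xunit;

public class CalculatorTests
{
    [Fact]
    public void Add_TwoNumbers_ReturnsSum()
    {
        Assert.Equal(5, 2 + 3);
    }

    // Data-driven test: a [Theory] fed by [InlineData].
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(-1, 1, 0)]
    public void Add_ManyInputs_ReturnsSum(int a, int b, int expected)
    {
        Assert.Equal(expected, a + b);
    }
}
```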

NUnit

NUnit is an open-source unit testing framework for Microsoft .NET. It serves the same purpose as JUnit does in the Java world and is one of many programs in the xUnit family. Tests can be run from a console runner, within Visual Studio through a Test Adapter, or through third-party runners. Tests can be run in parallel, and there is strong support for data-driven tests. NUnit supports multiple platforms including .NET Core, Xamarin Mobile, Compact Framework and Silverlight. In NUnit, every test case can be added to one or more categories, to allow for selective running. A basic NUnit test class looks like the sketch below:
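(As above, the scenario and names are illustrative.)

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        Assert.AreEqual(5, 2 + 3);
    }

    // Data-driven test using [TestCase].
    [TestCase(1, 2, 3)]
    [TestCase(-1, 1, 0)]
    public void Add_ManyInputs_ReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}
```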
Below is the basic comparison of the 3 tools mentioned above.
Attributes (each bullet lists the NUnit 3.x / MSTest 15.x / xUnit.net 2.x equivalents):
  • [Test] / [TestMethod] / [Fact]: marks a test method.
  • [TestFixture] / [TestClass] / n/a: xUnit.net does not require an attribute for a test class; it looks for all test methods in all public (exported) classes in the assembly.
  • Assert.Throws or Assert.That / [ExpectedException] / Assert.Throws or Record.Exception: xUnit.net has done away with the ExpectedException attribute in favor of Assert.Throws.
  • [SetUp] / [TestInitialize] / constructor: the xUnit.net team believes that use of [SetUp] is generally bad; however, you can implement a parameterless constructor as a direct replacement.
  • [TearDown] / [TestCleanup] / IDisposable.Dispose: the xUnit.net team believes that use of [TearDown] is generally bad; however, you can implement IDisposable.Dispose as a direct replacement.
  • [OneTimeSetUp] / [ClassInitialize] / IClassFixture<T>: to get per-class fixture setup, implement IClassFixture<T> on your test class.
  • [OneTimeTearDown] / [ClassCleanup] / IClassFixture<T>: to get per-class fixture teardown, implement IClassFixture<T> on your test class.
  • n/a / n/a / ICollectionFixture<T>: to get per-collection fixture setup and teardown, implement ICollectionFixture<T> on your test collection.
  • [Ignore("reason")] / [Ignore] / [Fact(Skip="reason")]: set the Skip parameter on the [Fact] attribute to temporarily skip a test.
  • [Property] / [TestProperty] / [Trait]: set arbitrary metadata on a test.
  • [Theory] / [DataSource] / [Theory] with [XxxData]: theory (data-driven test).

Conclusion

All frameworks will probably do 99% of the things you need to test on a day-to-day basis. Three or four years ago, the lack of certain features in MSTest made NUnit the better consideration. Today that gap has narrowed, so the choice between NUnit and MSTest matters less. Historically, unit test maintenance has been a nightmare because of set-up code. Packages like AutoFixture help alleviate those issues, but choosing a framework that forces the team to write more modular and flexible code feels like the better choice nowadays.
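As a brief illustration of how AutoFixture cuts down set-up code, here is a minimal sketch (the Customer type and its properties are invented for the example):

```csharp
using AutoFixture;
using Xunit;

public class Customer
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class CustomerTests
{
    [Fact]
    public void Anonymous_Customer_Is_Fully_Populated()
    {
        // AutoFixture generates the test data, so the test
        // needs no hand-written set-up block.
        var fixture = new Fixture();
        var customer = fixture.Create<Customer>();

        Assert.False(string.IsNullOrEmpty(customer.Name));
    }
}
```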

http://www.anarsolutions.com/automated-unit-testing-tools-comparison/utm_source=Blogger.com

Wednesday, February 13, 2019

How to Choose the Right DevOps Tools


The term DevOps stands for development and operations.  DevOps is a movement that focuses on collaboration between developers and operations, empathy for the customer, and infrastructure automation. In traditional models, developers write code and then hand it to operations to deploy and run in a production environment. This often leads to a lack of ownership between the two teams, as well as a slower pace of development, because agility clashes with risk management. In contrast, with a DevOps model, the two teams work together at each stage of software delivery toward common, customer-facing goals. Developers take ownership of their code, from code through production, and operations teams build tooling and processes that help developers leverage automation to build, test, and ship code faster and more reliably.
By breaking through walls in culture and processes, development happens more efficiently. And with the customer experience in mind from beginning to end, a DevOps approach ultimately results in a better product and happier, more empowered teams while delivering more value to customers and the business.
Following is the checklist to keep in mind while choosing the right DevOps tools.

Step 1: Collaboration

The waterfall approach of planning out all the work and every single dependency for a release runs counter to DevOps. Instead, with an agile approach, you can reduce risk and improve visibility by building and testing software in smaller chunks and iterating on customer feedback. That’s why many teams, including our own, plan and release in sprints of around two to four weeks.
As your team is sharing ideas and planning at the start of each sprint, take into account feedback from the previous sprint’s retrospective so the team and your services are continuously improving. Tools can help you centralize learnings and ideas into a plan. To kick off the brainstorming process, you can identify your target personas, and map customer research data, feature requests, and bug reports to each of them. We like to use physical or digital sticky notes during this process to make it easier to group together common themes and construct user stories for the backlog. By mapping user stories back to specific personas, we can place more of an emphasis on the customer value delivered. After the ideation phase, we organize and prioritize the tasks within a tool that tracks project burndown and daily sprint progress.
Top tools we can use: Active Collab, Pivotal Tracker, VersionOne, Jira, Trello, StoriesOnBoard

Step 2: Use tools to capture every request

No changes should be implemented outside the DevOps process. All types of requests for changes to the software, or for new additions to it, should be captured by the DevOps toolchain. This automates the acceptance of change requests that may flow in from either side, the business or the DevOps team; for instance, a request to change the software to improve access to the database.

Step 3: Usage of Agile Kanban project management

A primary advantage that Kanban has is that it encourages teams to focus on improving flow in the system.  As teams adopt Kanban, they become good at continuously delivering work they have completed.  Thus, Kanban facilitates doing incremental product releases with small chunks of new functionality or defect fixes.  This feature of Kanban makes it very suited to DevOps’ continuous delivery and deployment requirements.
The other big advantage of Kanban is how it enables you to visualize your entire value-stream and ensure stable flow. Thus, it helps you combine the workflows of different functions and activities right from Dev to Integration/ Build, Test, Deployment and beyond that to application monitoring. Initially, it will help your Dev and Ops people to work in a collaborative manner. Over a period of time, you can evolve into a single team and single workflow that includes all of Dev and Ops activities. Kanban provides you visibility to this entire process – and transformation to a DevOps culture.

Step 4: Usage of tools to Log Metrics

One should always opt for tools that help in understanding the productivity of the DevOps processes, both automated and manual, so you can determine whether they are working in your favor. First, decide which metrics are most relevant to your DevOps processes, such as speed to effective action versus errors occurred. Second, use an automated process to rectify issues without the help of a human controller; for instance, automatically dealing with problems related to scaling of software on a cloud platform.

Step 5: Implementation

Automated testing is only one part of test automation. Test automation is the ability to provision code and data and to verify the solution thus obtained to ensure its high quality. A continuous series of tests is a must with DevOps.

Step 6: Acceptance tests

It is necessary to perform acceptance tests, as they validate the deployment of each part and thereby the acceptance of the infrastructure. The testing process should also define the degree to which acceptance tests cover the apps, data and test suite. A good amount of time must be spent testing and retesting, and acceptance tests must be defined so that they stay in sync with the selected criteria. As applications evolve over time, new behavior is added and the tests should be updated and run again.
Automated Testing
Automated testing pays off over time by speeding up your development and testing cycles in the long run. And in a DevOps environment, it’s important for another reason: awareness.
To prepare for and support what Development builds, it’s important for Operations to have visibility into what is being tested, and how thoroughly. Unlike manual tests, automated tests are executed faithfully and with the same rigor every time. They also yield reports and trend graphs that help identify risky areas.
Risk is a fact of life in software, but you can’t mitigate what you can’t anticipate. Do your operations team a favor and let them peek under the hood with you. Look for tools that support wallboards and let everyone involved in the project comment on specific build or deployment results. Extra points for tools that make it easy to get Operations involved in blitz testing and exploratory testing.
Tools we can use: Bamboo, Bitbucket, Capture for Jira

Step 7: Continuous Feedback

Continuous feedback focuses on providing ongoing feedback and coaching by openly discussing an employee's strengths and weaknesses on a regular basis. Feedback is of utmost importance for spotting gaps and inefficiencies in the app, and feedback loops are a great help in automating the conversation around test results. The right tool should be able to spot any issue using manual or automated mechanisms, and a collaborative approach to solving the problem should be adopted to achieve impeccable results.
Release dashboards
One of the most stressful parts of shipping software is getting all the change, test, and deployment information for an upcoming release into one place. The last thing anyone needs before a release is a long meeting to report on status. This is where release dashboards come in.
Look for tools with a single dashboard integrated with your code repository and deployment tools. Find something that gives you full visibility on branches, builds, pull requests, and deployment warnings in one place.

DevOps tools


The DevOps tool categories include the following:
  • Version control: a set of applications that track changes made to a set of files over time, both manually and automatically. Compared to early version control systems, modern version control uses distributed storage, with either one master server (Subversion) or a web of distributed servers (Git or Mercurial). Version control systems also keep track of dependencies present in a version, for instance the type, brand, and version of the database.
  • Building and deployment: a set of tools that automate the building and deployment of software throughout the DevOps process, including continuous integration and continuous delivery.
  • Functional and non-functional testing: a set of tools that provide automated testing of both the functional and non-functional aspects of a system. A good set of testing tools should provide integrated unit testing, check performance, and verify the security of the app. The sole motive of this testing is to exercise the whole system automatically.
  • Provisioning: configuration management is part of provisioning. Basically, that's using a tool like Chef, Puppet or Ansible to configure your server. "Provisioning" often implies it's the first time you do it; config management usually happens repeatedly. These tools help create the platforms required for deployment of the software, monitor their functions, and log any changes that occur to the configuration of the data or software, which helps get the system back into a state of equilibrium.

Must Have DevOps tools

Jenkins: Jenkins is a leading DevOps tool for executing and monitoring repeated jobs. It allows DevOps teams to merge changes with ease and to access outputs for quick identification of problems.
Key Features:
  • Self-contained Java-based program ready to run out of the box with Windows, Mac OS X, and other Unix-like operating systems
  • Continuous integration and continuous delivery
  • Easily set up and configured via a web interface
  • Hundreds of plugins in the Update Center
Vagrant: Vagrant is another tool to help your organization transition to a DevOps culture. Vagrant also helps improve your entire workflow of using Puppet, improving development processes for both developers and operations.
Key Features:
  • No complicated setup process; simply download and install within minutes on Mac OS X, Windows, or a popular distribution of Linux
  • Create a single file for projects describing the type of machine you want, the software you want to install, and how you want to access the machine, and then store the file with your project code
  • Use a single command, vagrant up, and watch as Vagrant puts together your complete development environment so that DevOps team members have identical development environments
Monit: A simple watchdog tool that ensures a given process is running appropriately. It is easy to set up and configure, even for a multi-service architecture.
Key Features:
  • Small open source utility for managing and monitoring Unix systems
  • Conducts automatic maintenance and repair
  • Executes meaningful causal actions in error situations
PagerDuty: A DevOps tool that helps protect brand reputation and the customer experience by providing visibility into critical systems and applications. It quickly detects, triages and resolves incidents, thereby helping deliver high-performing apps and an excellent customer experience.
Key Features:
  • Real-time alerts
  • Gain visibility into critical systems and applications
  • Quickly detect, triage, and resolve incidents from development through production
  • Full-stack visibility across dev and production environments
  • Event intelligence for actionable insights
Prometheus: Prometheus is a popular DevOps monitoring tool, particularly with teams that use Grafana as the visualization framework. It is an open-source service monitoring system with a flexible query language for slicing time-series data to generate alerts, tables and graphs. It supports more than 10 languages and includes easy-to-implement custom libraries; a brief instrumentation sketch follows the feature list below.
Key Features:
  • Flexible query language for slicing and dicing collected time series data to generate graphs, tables, and alerts
  • Stores time series in memory and on local disk with scaling achieved by functional sharing and federation
  • Supports more than 10 languages and includes easy-to-implement custom libraries
  • Alerts based on Prometheus’s flexible query language
  • Alert manager handles notifications and silencing
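As a hedged sketch of what instrumenting a .NET app for Prometheus can look like (it assumes the community prometheus-net client library, which the post does not name, and the metric name is invented):

```csharp
using Prometheus;

class Program
{
    static void Main()
    {
        // Expose an HTTP /metrics endpoint for the Prometheus server to scrape.
        var server = new MetricServer(port: 9100);
        server.Start();

        // A counter the application increments as it handles work;
        // Prometheus scrapes and stores it as a time series.
        var requests = Metrics.CreateCounter(
            "myapp_requests_total", "Total requests handled.");

        requests.Inc();
        // ... application work ...
    }
}
```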

SolarWinds: A DevOps tool that offers real-time correlation and remediation. Its Log & Event Manager software offers strong troubleshooting, security solutions and fixes, as well as data compliance.
Key Features:
  • Normalize logs to quickly identify security incidents and simplify troubleshooting
  • Out-Of-The-Box rules and reports for easily meeting industry compliance requirements
  • Node-based licensing
  • Real-time event correlation
  • Real-time remediation
  • File integrity monitoring
  • Licenses for 30 nodes to 2,500 nodes

Conclusion:

Selecting the right tools for DevOps is, without doubt, a difficult task, compounded by the complexity of tools that are still relatively new to most development shops. However, if one follows these guidelines and the checklist above, one should be able to sail through DevOps adoption and create a foolproof system. To learn more about DevOps best practices, check out our other posts on why DevOps can have a huge impact on the efficiency of your SDLC, or contact us at info@anarsolutions.com
http://www.anarsolutions.com/how-to-choose-the-right-devops-tools/utm-source=Blogger.com 

Friday, February 8, 2019

Regression Testing: What is it and How to Use it


It is a common practice that when a defect is fixed, two forms of testing are done on the fixed code. The first is confirmation testing to verify that the fix has fixed the defect and the second is regression testing to ensure that the fix hasn’t broken existing functionality.
Basically, 'Regression Testing' is the process of re-testing already functional parts of a software project. When any changes, bug fixes or enhancements are carried out, it is imperative to make sure that other fully working processes or modules in the software are left unharmed. It is a process that should be compulsory in any tester's practice.
It is important to note that the same principle applies when a new feature or functionality is added to the existing application. In the case of new functionality being added, tests can verify that the new features work as per the requirement and design specifications while regression testing can show that the new code hasn’t broken any existing functionality.

When to perform Regression Testing?

Regression testing should be performed after any change is made to the code base. Additionally, regression tests should also be executed anytime a previously discovered issue has been marked as fixed and must be verified.
Your team will need to decide the regression testing schedule that best meets your needs, but most organizations find it useful to perform regression testing on a strict schedule. This may be at the end of every work day, weekly, bi-weekly, or even after every single repository commit is pushed. The more often your team can perform regression testing, the more issues can be discovered and fixed, which will lead to a more stable and functional piece of software at the end of development and leading into production.

Types of Regression Tests

As these are repetitive tests, test cases can be automated so that a set of test cases alone can easily be executed on a new build. Regression test cases need to be selected very carefully, so that maximum functionality is covered by a minimum set of test cases. This set of test cases needs continuous improvement as new functionality is added. Selection becomes very difficult when the application scope is huge and there are continuous increments or patches to the system. In such cases, selective tests need to be executed to save testing cost and time. These selective test cases are picked based on the enhancements made to the system and the parts they can affect the most.

What are common Regression Testing Techniques?

  • Unit Regression Testing – Immediately after coding changes are complete for a single unit, a tester – typically the developer responsible for the code – re-runs all previously passed unit tests.
  • Smoke Testing – Smoke testing, also called Build Verification Testing, is a series of high-priority regression tests that are executed after code changes are merged, and before any other testing.
  • Sanity Testing – Sanity testing is a subset of functional testing that examines only the changed modules. The goal of sanity testing is assurance that new features work as designed and that defects reported in prior testing have been resolved.
  • Complete Regression – Also known as the "retest all" technique, all regression test cases are executed in a complete regression. While a complete regression may be tempting for assurance that the application has been thoroughly tested, it is by definition costly and is not always practical, especially for minor releases. In general, a full regression test suite may be necessary for major releases with significant code changes, following major configuration changes such as a port to a new platform, or to assure compatibility with an updated operating system.
  • Partial Regression – As an alternative to a complete regression, a partial regression strategy selects only certain tests to be run. Tests may be selected based on the priority of the test case, or they may be chosen based on the particular module(s) affected by the change.

Regression Test of GUI Application:

It is difficult to perform GUI (Graphical User Interface) regression testing when the GUI structure is modified, because test cases written against the old GUI either become obsolete or need to be modified. Re-using regression test cases here means modifying the GUI test cases according to the new GUI, and this task becomes a bulky one if you have a large set of GUI test cases.
Examples of Regression Testing tools are:
  • Selenium
  • Winrunner
  • QTP
  • AdventNet QEngine
  • Regression Tester
  • vTest
  • Watir
  • actiWate
  • Rational Functional Tester
  • SilkTest
  • TimeShiftX
Most of these tools are both functional and regression test tools. Adding and updating regression test cases in an automation test suite is a cumbersome task, so when selecting an automation tool for regression tests, you should check whether the tool allows you to add or update test cases easily. In most cases, automated regression test cases need frequent updates due to frequent changes in the system. The sketch below shows what a simple automated regression check might look like with Selenium.
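A minimal, hedged sketch (it assumes the Selenium WebDriver C# bindings driven by NUnit, with ChromeDriver installed; the URL and expected title are illustrative):

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class HomePageRegressionTests
{
    private IWebDriver _driver;

    [SetUp]
    public void StartBrowser() => _driver = new ChromeDriver();

    // A regression check: the home page title must stay the same
    // after a new build is deployed.
    [Test]
    public void HomePage_Title_IsUnchanged()
    {
        _driver.Navigate().GoToUrl("https://example.com/");
        Assert.AreEqual("Example Domain", _driver.Title);
    }

    [TearDown]
    public void StopBrowser() => _driver.Quit();
}
```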

Best Practices in Regression Testing

  • Maintain a schedule: Choose a schedule of testing you can maintain throughout the software development life cycle. This avoids cases where testing is placed on the back burner.
  • Use a test management tool: Properly track all tests being performed on a regular basis, and have historical records of their performance over time. Do this using a simple test management tool of some kind. This can lead to more efficient decisions about what test cases must be executed. It can help pinpoint necessary improvements and changes in test cases. It can also drive clearer analysis of the regression testing outcomes.
  • Evaluate test prioritization: Regression testing is more difficult when the application's scope is large and there are continuous increments or patches to the system, so prioritize test cases, for example by risk and by the modules a change touches, so that the most valuable tests run first.
Thus, we can conclude that an effective regression strategy saves organizations both time and money.
http://www.anarsolutions.com/regression-testing/utm-source=Blogger.com