Wednesday, December 25, 2019

Features of ASP.NET Core 2.2

The main theme for the ASP.NET Core 2.2 release was to improve developer productivity and platform functionality with regard to building Web/HTTP APIs. Some of the important features of ASP.NET Core 2.2 include:

  • Web API improvements are the big theme for this release.
  • The ASP.NET Core web project template is updated to Bootstrap 4, which gives pages a fresh look; most extra elements are removed, and the default identity UI also supports Bootstrap 4. The SPA-based templates now support Angular 6.
  • Better integration with popular OpenAPI (Swagger) libraries, including design-time checks with code analyzers. ASP.NET Core 2.1 introduced the ApiController attribute to denote a web API controller class that performs automatic model validation and automatically responds with a 400 error; in 2.2 the attribute is expanded to provide metadata for API Explorer and a better end-to-end API documentation experience for Swagger/OpenAPI definitions.
    • In effect, this makes it possible for all MVC Core applications to have a good Swagger/OpenAPI definition by default. To achieve it, a set of analyzers is introduced to find cases where code does not match the conventions.
  • Endpoint routing, a new routing system (internally called the dispatcher), with up to 20% improved routing performance in MVC. It runs the URL-matching step very early in the pipeline, so middleware can see the endpoint that was selected, as well as the metadata associated with that endpoint, and it takes care of several long-standing routing problems.
  • Improved URL generation with the LinkGenerator class and support for route parameter transformers.
  • A new health checks framework integrated into the platform for monitoring the health of apps and APIs.
  • IIS in-process hosting: ASP.NET Core applications can now run in-process in IIS, giving a significant performance boost of up to 400% improved throughput.
  • Up to 15% improved MVC model validation performance.
  • Problem Details (RFC 7807) support in MVC for detailed API error results.
  • An OpenID Connect based authorization server, which allows your ASP.NET application to act as an authentication point for your projects, be they web site to API, SPA to API, native application to API or, for distributed applications, API to API.
  • HTTP/2 server support (in preview) in Kestrel and HttpClient.
  • A Java client for ASP.NET Core SignalR, with clients for further platforms such as C++ planned.
  • Up to 60% improved HTTP client performance on Linux and 20% on Windows.
  • A new code generation tool to produce client-side code (C# and TypeScript) for calling Web APIs.
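As a minimal sketch of how the new health checks framework is wired up in 2.2 (the "/health" path below is a conventional choice, not a requirement):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register the health checks services shipped with ASP.NET Core 2.2.
        services.AddHealthChecks();
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Expose a liveness endpoint; monitoring tools can poll this URL.
        app.UseHealthChecks("/health");
        app.UseMvc();
    }
}
```

Custom checks (database connectivity, downstream services, etc.) can be chained onto the AddHealthChecks() call.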
Common features in ASP.NET Core 2.2 WebApi: Mapping
Data mapping is a common need: as the application layers interact with each other, objects must be transformed into other objects, similar or not (DTOs to domain objects and vice versa).
There are different ways to proceed: you can create services, classic static classes, or extension methods.
AutoMapper is a convention-based object-object mapper.
AutoMapper uses a fluent configuration API to define an object-object mapping strategy. AutoMapper uses a convention-based matching algorithm to match up source to destination values. AutoMapper is geared towards model projection scenarios to flatten complex object models to DTOs and other simple objects, whose design is better suited for serialization, communication, messaging, or simply an anti-corruption layer between the domain and application layer.
The important thing is to inherit from the Profile class. When the application starts, every class that inherits from Profile will be registered by AutoMapper and its mapping strategy will be enabled.
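A minimal sketch of such a profile (the Employee and EmployeeDto types are hypothetical, used only for illustration):

```csharp
using AutoMapper;

// Hypothetical domain object and DTO.
public class Employee    { public int Id { get; set; } public string Name { get; set; } }
public class EmployeeDto { public int Id { get; set; } public string Name { get; set; } }

// Any class inheriting from Profile is picked up when AutoMapper scans
// the assemblies at startup, enabling its mapping strategy.
public class EmployeeProfile : Profile
{
    public EmployeeProfile()
    {
        // Convention-based mapping: properties with matching names map automatically.
        CreateMap<Employee, EmployeeDto>();
        CreateMap<EmployeeDto, Employee>();
    }
}
```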
Common features in ASP.NET Core 2.2 WebApi: Caching
Response caching reduces the number of requests a client or proxy makes to a web server. Response caching also reduces the amount of work the web server performs to generate a response. Response caching is controlled by headers that specify how you want client, proxy, and middleware to cache responses.
The ResponseCache attribute participates in setting response caching headers, which clients may honor when caching responses. Response Caching Middleware can be used to cache responses on the server. The middleware can use ResponseCacheAttribute properties to influence server-side caching behavior.
ASP.NET Core also offers an in-memory cache, which stores cached data in the memory of the web server and works natively with ASP.NET Core dependency injection.
The attribute can be used to prevent caching entirely as well.
In short, caching can be applied in several different ways.
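For instance, a sketch of the ResponseCache attribute in use (the controller and action names are made up):

```csharp
using Microsoft.AspNetCore.Mvc;

public class NewsController : Controller
{
    // Allows clients and proxies to cache this response for 60 seconds.
    [ResponseCache(Duration = 60, Location = ResponseCacheLocation.Any)]
    public IActionResult Headlines() => Ok(new[] { "headline 1", "headline 2" });

    // The same attribute can disable caching entirely, e.g. for sensitive data.
    [ResponseCache(NoStore = true, Location = ResponseCacheLocation.None)]
    public IActionResult Balance() => Ok(1234.56m);
}
```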
HTTP-based response caching
The HTTP 1.1 Caching specification describes how Internet caches should behave. The primary HTTP header used for caching is Cache-Control, which is used to specify cache directives. The directives control caching behavior as requests make their way from clients to servers and as responses make their way from servers back to clients. Requests and responses move through proxy servers, and proxy servers must also conform to the HTTP 1.1 Caching specification.
Keep in mind caching can significantly improve the performance and scalability of an ASP.NET Core app.
Common features in ASP.NET Core 2.2 WebApi: Profiling
It is quite natural to wonder about the performance of a newly developed Web API.
There are, for example, tools like Application Insights in Visual Studio and Azure that allow you to monitor your applications; another option is Stackify Prefix, a free tool that traces HTTP requests.

Benefits Of Custom/ Bespoke Software Solutions

Software is traditionally divided into two categories: packaged or custom. Custom software, also known as “Bespoke” software, is a type of application developed specially to suit a business or an organization to fulfill its specific business-centric requirements. Customized software is generally designed for a single client, or a group of clients (customers) who decide what kind of functionality and processes the software should possess.
Through the years, companies have learned how packaged software products have fallen short when it comes to meeting diverse buyer needs. This has prompted greater demand for custom software products designed to meet specific needs of each organization. With these accommodations, exclusiveness in the solutions of business issues is guaranteed.
Benefits of custom-building software

Simplicity

Bespoke software is designed specifically around your business, so you don’t have to worry about unwanted features or unfamiliar terminology that is not appropriate to your business. This reduces learning time and is obviously more efficient to use.

Personalization

When it comes to business operations and software development, there is no such thing as a one-size-fits-all solution. Although there is a lot of high-quality ready-made software available, there’s a pretty slim probability that it will fit seamlessly into your organization. Modifying some built-in features of your packaged software might meet a small need but it can’t compare to a software solution that has been built from scratch specifically for your business.
Custom software development ensures that your software will be able to handle all your daily processes. Personalized software can be as complex or intuitive as you like, and this makes it the ideal choice for businesses of all sizes. When you have software that is designed to do exactly what you want it to do, it helps make your day easier.
What works best for one business doesn’t necessarily work for another. You may be following certain processes which others don’t. Software that is developed exclusively for you ensures that all of your activities and processes are properly addressed and automated exactly as per your requirements.

Branding and Identity

With unique tools, such as forms and auto messages in your brand voice, you will stand out from the crowd. Sometimes even a simple detail can be a key differentiator between you and the competition.

Efficient Workflow

You don’t have to mold your ways of working to suit particular software: provided it is custom made, your software can be changed easily to suit your requirements as and when required. Bespoke software is typically developed through an iterative, Agile process to ensure constant alignment with business needs. As the finished application will have been matched to your specific working practices, it is both faster and easier to use.

Complete Development Ownership

When it comes to changes or improvements, you are firmly in the driver’s seat. There is no pressure to upgrade just because there is a later version – with new features and changes you may not want or need. This also means you won’t have problems such as new versions not being backwards compatible with old data or the screen layout changing and incurring a new learning curve. It’s your software, and your decision.

Security

A major concern for many B2B and B2C companies, data access and security concerns affect many end-users in the market today. It’s often a prime target for hackers because it’s easier to attack multiple companies all running the same programs.
Because off-the-shelf solutions are widely used, they’re more susceptible to external threats than custom-built applications. With a custom solution, you’re the only one running your program.
People transacting online want to ensure their transactions are safe and secure at all times. Supporting expensive security protocols can make you pass on added costs to the services you offer to your customers. This can make you lose your competitive edge in the market. Moreover, the flow of data within internal processes of the organization also needs to be regulated by implementing strict security standards. With customized software development, you have the power to decide which data-security technology or protocol is ideally suited for your business and integrate that in your software.

Potential Marketability

In some scenarios you may want to white label and sell a software product that you have played (and paid!) a part in developing – even though it was originally designed for internal use. A bespoke solution for you could become an off-the-shelf solution for other businesses.

Expansion of existing tools

Often a business or department will use documents or spreadsheets to log data. This may work for an individual or a small team, but often there is huge benefit to transforming repetitive tasks into automated, tailored multi-user tools that store data in a central repository. This would not only save time for the individual but helps with transparency, data validation, access and reporting across the business.

Cost effectiveness

With customized development, you can plan and phase the development process. You’re not required to invest a huge sum of money upfront to reap the benefits of automation. Based upon your budget and the availability of funds, you can start automating individual process flows in an organized, phased manner, keeping development affordable.
The takeaway for organizations and businesses is even though you’re required to spend some time to define your exact automation needs and wait while your software is developed, it’s worthwhile to opt for customized software development since you can benefit from an automation process that is tailor made to suit your unique needs and business-centric requirements.

Integration

Another advantage to using custom applications is the ability to integrate with your current software.
Businesses often require multiple programs to run optimally, and adding software that isn’t made to fit with those programs – or needs additional third-party support to integrate properly – will only add to the bulk.
Additionally, templated software usually requires you to adjust your processes, not the other way around.
Bespoke software can generally be customised to integrate smoothly with any other key software used within the business – and introducing new applications doesn’t mean the previous integrations will stop working.
Custom applications are built to be compatible with the software you already have installed. In fact, it can even automate processes that weren’t previously integrated in your setup, simplifying the process further.
It can also help in cases where technology is developing faster than you can keep up with. Instead of dropping thousands of dollars to replace your whole system, custom software can be used to update the areas you need while still being compatible with the rest of your system.
At the end of the day, new applications should make your life easier, not harder. Custom built software can do that for you.

Invention

Since the software is totally customized, you have the option to decide what kind of software development technology to use to design your own app. You have the power to opt for trend-setting, disruptive technologies to design your customized app and make it work the way you want it to.
Business operations can be complex, and every organization has different needs and issues. Custom software plays a key part in your organization’s growth and efficiency. The numerous advantages of custom software development include integration, personalization, scalability, support, security, and cost-effectiveness. It’s better to create tailor-made products than to go off-the-shelf.
Every organization has different needs. Most of them realize that off-the-shelf software will fall short of their expectations and is not going to make their dreams come true. By using custom software, companies can turn their ideas into reality and gain an advantage over rivals who are still stuck with off-the-shelf solutions.
At AnAr we make sure custom-built software gives you an advantage over your competitors who are stuck running off-the-shelf solutions that simply can’t meet all of their needs. It is our passion. We build custom software to fit your unique needs and improve your business processes.
Regardless of the industry you’re in or the size of your company, you can always rely on AnAr for successfully implementing your requirements.
http://www.anarsolutions.com/benefits-custom-bespoke/?utm-source=Blogger.com

Tuesday, December 24, 2019

Performance Optimization Guidelines For MS SQL Database


Speed is an important factor for the usability of a website. To retain its users, any application or website must run fast. The overall performance of any app is largely dependent on database performance. Database optimization involves maximizing the speed and efficiency with which data is retrieved.
This article will cover performance improvement guidelines for MS SQL server database.
Indexes:
Indexing is an effective way to tune your SQL database that is often neglected during development. In basic terms, an index is a data structure that improves the speed of data retrieval operations on a database table by providing rapid random lookups and efficient access of ordered records. Setting up indexes usually doesn’t require much coding, but it does take a bit of thought.
Consider creating indexes on columns frequently used in joins and in WHERE, ORDER BY, and GROUP BY clauses. Such columns are the best candidates for indexes, since they are the ones against which you search for particular records.
Create Clustered Indexes: Since a table can have only one clustered index, you should choose its columns very carefully. Analyze all your queries, pick the most frequently used ones, and include in the clustered index only those columns which provide the most performance benefit.
Create Non-Clustered Indexes: You should consider non-clustered index creation carefully because each index can take up disk space and has impact on data modification.
Rebuild Indexes Periodically: As you update, delete and create records in your tables, your indexes become fragmented and performance may degrade over time. You should consider rebuilding indexes periodically in order to keep performance at its best.
Use Covering Indexes: A covering index is an index that includes all the columns referenced in the query. Covering indexes can improve performance because all the data for the query is contained within the index itself, so only the index pages, not the data pages, are used to retrieve the data.
Filtered Index: A filtered index is simply a non-clustered index with a WHERE clause. Because of this predicate, only a portion of the table’s rows is indexed.
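A sketch of a filtered index (the table and column names are illustrative):

```sql
-- Index only the open orders rather than the whole table.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (OrderDate, CustomerId)
WHERE Status = 'Open';
```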
Data Archiving: This is the process of removing selected data records from an operational database when they are not expected to be referenced again, and storing them in an archive data store from which they can be retrieved if needed. There is no built-in command or tool for archiving databases.
Table Partitioning: It is a way to divide a large table into smaller, more manageable parts without having to create separate tables for each part. Data in a partitioned table is physically stored in groups of rows called partitions, which makes it easier to fetch records and requires fewer table scans and as a result the response time is also less and memory utilization is less. Data in a partitioned table is partitioned based on a single column, the partition column, often called the ‘partition key’. It is important to select a partition column that is almost always used as a filter in queries.
When the partition column is used as a filter in queries, SQL Server can access only the relevant partitions.
Let’s take the example of an Invoices table containing data for every day in 2015, 2016 and 2017. If we partition the table on the date column and make a separate partition for each year, then all rows with dates before or in 2015 are placed in the first partition, all rows with dates in 2016 in the second partition, and all rows with dates in 2017 in the third.
When such a query filters on the partition column and selects, say, only invoices dated in 2016, only the second partition is scanned.
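Such a query might look like this (assuming the Invoices table described above, partitioned by year on an InvoiceDate column):

```sql
-- Only the 2016 partition is touched, because the filter is on the partition column.
SELECT InvoiceId, Amount
FROM dbo.Invoices
WHERE InvoiceDate >= '20160101' AND InvoiceDate < '20170101';
```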
Evaluate Query Execution Plan: Understanding the actual plan that runs is the first step toward optimizing a query. Its main function is to graphically display the data retrieval methods chosen by the SQL Server query optimizer. In SQL server management studio, you’ll see two options related to query plans –
  • Display Estimated Execution Plan (Ctrl + L)
  • Include Actual Execution Plan (Ctrl + M)
Enable the Display Execution Plan option, and run your query against a meaningful data load to see the plan that is created by the optimizer.
Evaluate this plan and then identify any good indexes that the optimizer could use. Also, identify the part of your query that takes the longest time to run and that might be better optimized.
You might see a missing index detected in the execution plan. To create it, just right-click in the execution plan and choose “Missing Index Details…”.
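The “Missing Index Details…” option generates a script along these lines (the index, table and column names below are placeholders; SSMS emits a generic name you should rename before running):

```sql
CREATE NONCLUSTERED INDEX [IX_Employee_Salary]
ON dbo.Employee (Salary)
INCLUDE (Name);
```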
Query Optimization Guidelines:
  1. Use parameters in queries
    The SQL Server query optimizer keeps recently used query plans in memory. When you do not use parameters, the literal values embedded in the query text make otherwise-identical queries differ from each other, so the Query Optimizer will not reuse their plans. With parameters, the number of query plans in memory decreases and they are far more likely to be reused.
  2. Retrieve only the data you need
    Sometimes you may be tempted to write SELECT * FROM … in your queries; this retrieves all fields in a table when you only need some of them. In order to reduce the size of the transferred data, you should specify the list of just the columns you need.
  3. Prefix schema names
    Always prefix object names (table names, stored procedure names, etc.) with their schema name.
    Reason: if the schema name is not provided, SQL Server’s engine searches the schemas until it finds the object.
               Inefficient: SELECT * FROM Employee
               Efficient:  SELECT EmployeeId, Name, Salary FROM dbo.Employee
  4. Use locking hints to minimize locking
    Within transactions, use the WITH (NOLOCK) option where appropriate. The NOLOCK table hint lets you instruct the engine to read a given table without taking shared locks; note that this permits dirty reads, so use it only where stale or uncommitted data is acceptable.
  5. Choose the smallest data type that works for each column
    Explicit and implicit conversions may be costly in terms of the time it takes to perform the conversion itself. Unicode data types like nchar and nvarchar take twice as much storage space as ASCII data types like char and varchar.
  6. Limit the use of cursors
    Cursors can cause significant performance degradation. We can often avoid cursors through:
    1. wise use of joins, 2. WHILE loops, 3. user-defined functions.
  7. Avoid correlated SQL subqueries
    A correlated subquery is one which uses values from the parent query. This kind of SQL query tends to run row by row, once for each row returned by the outer query, and thus decreases SQL query performance.
  8. Use EXISTS() instead of COUNT() when checking for the existence of a record
    This SQL optimization technique concerns the use of EXISTS(). If you want to check whether a record exists, use EXISTS() instead of COUNT(). While COUNT() scans the entire table, counting up all entries matching your condition, EXISTS() exits as soon as it finds the result it needs.
  9. Keep database administrator tasks in mind
    Do not forget to take database administrator tasks into account when you think about performance. For example, consider the impact that database backups, statistics updates and index rebuilds have on your systems. The administrator should also keep an eye on disk space, the connection pool, and the size of backup and log files. Include these operations in your testing and performance analysis.
    SQL Server hardware should not be shared with other services.
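Two of the guidelines above, sketched in T-SQL (the table, column and parameter names are illustrative):

```sql
-- 1. Parameterized query: the plan for this statement can be reused
--    regardless of the @MinSalary value supplied.
EXEC sp_executesql
     N'SELECT EmployeeId, Name FROM dbo.Employee WHERE Salary > @MinSalary',
     N'@MinSalary money',
     @MinSalary = 50000;

-- 8. EXISTS() instead of COUNT(*): stops at the first matching row.
IF EXISTS (SELECT 1 FROM dbo.Employee WHERE Name = N'Smith')
    PRINT 'Record found';
```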
Conclusion: The guidelines discussed here are for basic performance tuning. If we follow these steps, we may get good improvement on performance. To do advanced SQL server performance tuning we would need to dig much deeper into each of the steps covered here.
http://www.anarsolutions.com/performance-ms-sql-database/?utm-source=Blogger.com

Thursday, December 12, 2019

What is the difference between Libraries and frameworks?

Libraries and frameworks are at the center of intense competition in front-end development.

These days, front-end development is largely defined by which libraries or frameworks are in use.
Every year new projects appear, each with its own features, but by now we can roughly agree that Angular, React and Vue.js are the pioneers in this world.
Many of us are unaware of the difference between the two, which is important to understand during development. The usual answer to this question, if asked, is “a framework is a collection of various libraries”. However, this definition is not entirely true. “Who calls whom”, i.e. the caller/callee relationship, defines the difference between the two terms. It is our code which calls the library code, while in a framework it is the framework’s code which calls our code.
The simple answer to this one is you call a library, but a framework calls you. Let’s explain that by analogy, also touching briefly on what it means from a programming perspective.
Let’s see how.
A library is essentially a set of functions that you can call, these days usually organized into classes. Each call does some work and returns control to the client.
Library: an entire toolkit which highly abstracts different layers, like browsers, DOM models, etc. As a good toolkit, it offers a lot of tools and neat utilities which, in general, simplify your coding experience. It is a generic set of tools that aids you in various tasks without necessarily addressing one single problem.
A library is, at heart, a collection of class and function definitions. The reason behind it is simply code reuse, i.e. using code that has already been written by other developers. The classes and methods normally define specific operations in a domain-specific area. For example, there are mathematics libraries which let developers simply call a function without redoing the implementation of the underlying algorithm.
Sample JavaScript Library : jQuery, MooTools, Prototype etc.,
A framework embodies some abstract design, with more behavior built in. In order to use it, you need to insert your behavior into various places in the framework, either by subclassing or by plugging in your own classes. The framework’s code then calls your code at these points.
Basically, it prescribes a given structure for “how” you should organize your code, much like a code template, along with some helpers, constructors, etc. to solve or simplify a specific problem or to bring your architecture into “order”. Simply put, it imposes structure upon your code in order to address a particular problem.
In a framework, all the control flow is already there, and there is a set of predefined white spots that you should fill in with your code. A framework is normally more complex: it defines a skeleton, and the application defines its own features to fill out that skeleton. This way, your code will be called by the framework when appropriate. The benefit is that developers do not need to worry about whether a design is good or not, but just about implementing domain-specific functions.
Sample JavaScript Frameworks: Angular ,React JS etc.,


Difference between Library and Framework

1.       Meaning

In programming, a library is a collection of reusable functions, meaning resources you can reuse, used by computer programs. The resources, sometimes called modules, are generally stored in object format. Most programming languages have their own standard libraries, but programmers can also create their own custom libraries. In simple terms, a library is a set of functions that you can call, whereas a framework is a piece of code that dictates the architecture of your project. In a way, frameworks and programming languages are intertwined, and together they aid in building computer programs.

2.       Inversion of Control

The “Inversion of Control” is the key difference which separates a framework from a library. A library is a set of functions and routines used by other programs, and you are in full control of it when you call a method from the library. However, the control is inverted in the case of a framework: it dictates the structure of your project, and your code never calls into the framework; instead, the framework calls you. Simply put, you can think of a library as a function of an application, and of a framework as the skeleton of the application in which the application defines its own features.
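A minimal JavaScript sketch of this caller/callee difference (the "library" and "framework" here are made up purely for illustration):

```javascript
// Library: your code is in control and calls the library's functions.
const mathLib = {
  square: (n) => n * n,
};
const result = mathLib.square(4); // you call the library and get control back

// Framework: the framework is in control and calls your code at
// predefined extension points ("don't call us, we'll call you").
function miniFramework(handlers) {
  // The framework decides when, and in what order, your handlers run.
  handlers.onStart();
  return handlers.onRender();
}

const output = miniFramework({
  onStart: () => { /* your initialization code */ },
  onRender: () => `value: ${result}`,
});
```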

3.       Function

Libraries are sets of functions that can be used anywhere; they are simply pieces of code written by other developers which can be reused. They are incorporated seamlessly into existing projects to add functionality that you can access through an API. They are mostly used for frequently needed modules, because you don’t have to explicitly link them to every program that uses them. They are important in the program linking and binding process. Frameworks, on the other hand, provide a standard way to build and deploy applications, and are mostly used when starting a new project rather than being integrated into existing ones.

4.       Example

To better understand the difference between a library and a framework, let’s take a look at jQuery and AngularJS. jQuery is a cross-platform JavaScript library that simplifies DOM manipulation along with a lot of other complicated things such as CSS manipulation, HTML event methods, AJAX calls etc. The purpose of jQuery is to simplify use of JavaScript on your website. AngularJS, on the other hand, is a structural framework based on the MVC architecture used for creating dynamic web applications. It’s entirely based on HTML and JavaScript and unlike jQuery, it cannot be integrated into existing projects because as a framework, it dictates how your code is to be structured and run.
http://www.anarsolutions.com/difference-between-libraries-frameworks/?utm_source=Blogger

Wednesday, December 11, 2019

Top 8 Architectural Principles for Designing Modern Web Applications

The key objective of a software architect is to minimize the complexity of an enterprise software system by segregating the design into various areas of concern.
The creation of a well-designed system architecture for large web applications poses immense challenges in the software development process. System architecture design decisions can significantly influence system scalability and maintainability. Success lies in creating an architectural framework that is responsive to the architectural challenges of these web applications.
A comprehensive understanding of the architectural design principles below will help businesses manage the many challenges they face at implementation time.
Ideally these principles will lead businesses toward creating applications out of isolated components that are not closely interconnected to other parts of the application, but rather interact through explicit interfaces or messaging systems.
ARCHITECTURAL PRINCIPLES
Separation of Concerns
A key principle of software design, this principle encompasses the creation of a system architecture with layered components, each addressing a separate concern. A software architecture designed using this principle can be maintained easily, is less tightly coupled, and is far less likely to break the Don’t Repeat Yourself principle.
Architecture designed by deploying this principle keeps business logic and rules in a discrete location, while infrastructure and user interface reside in a separate project. This ensures that iterations in software design do not affect the core business model, while at the same time the model can be tested easily for its efficacy.
Encapsulation
Encapsulation is defined as the casing up of data inside a single discrete unit. This principle holds together the code and the data it manages. In other words, encapsulation protects the data from being accessed by entities outside the unit.
Software architects utilize this key principle of encapsulation in Object-Oriented Programming, using a programming language like Java, which lends their code security, flexibility and easy maintainability. It allows the developer to design constructs (objects, functions and classes) that declare a public interface through which clients can interact, without the internal implementation being tampered with or the dependent client code being affected. Most software developers are comfortable using encapsulation to hide a class’s instance variables from illegal direct access.
This hidden data can be accessed only through a member function of the class in which it is declared; this is popularly known as data hiding. Encapsulation provides a great deal of flexibility to enterprise systems: the entire business process can be overhauled without significant changes in the delivery process. Encapsulation enables OO experts to design agile systems, systems that are open to change as your business undergoes transformation. A single module of the entire system can undergo changes independently, without any impact on any other module of the system.
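A small sketch of data hiding (the Account class here is hypothetical):

```csharp
using System;

public class Account
{
    private decimal balance;            // hidden data: no direct outside access

    public decimal Balance => balance;  // read-only public interface

    public void Deposit(decimal amount)
    {
        // All changes go through validated members of the class itself.
        if (amount <= 0) throw new ArgumentOutOfRangeException(nameof(amount));
        balance += amount;
    }
}
```

Callers can read Balance and call Deposit, but they can never set the balance field directly, so the class's invariants are enforced in one place.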
Single Responsibility
This is one of the most widely applied principles developers use to build robust, scalable, and easy-to-maintain applications. It can be applied not just to classes or microservices but also to software components in general. Its simplicity is its key strength, making the application easier to implement and open to future change.
One key consideration is that application requirements change over time; the more responsibilities a class or software component is assigned, the more frequently it must change, and those responsibilities cease to be independent of each other.
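A small sketch of the idea (both classes are hypothetical names for illustration): generating a report and delivering it are separate responsibilities, so each class has exactly one reason to change.

```java
// Hypothetical sketch: formatting and delivery are separate responsibilities,
// so a change in report layout never forces a change in how reports are sent.
class ReportGenerator {
    String generate(String title, String body) {
        return title + "\n" + body;
    }
}

class ReportSender {
    void send(String report) {
        // Stand-in for real delivery (e-mail, HTTP, message queue).
        System.out.println("Sending:\n" + report);
    }
}

class SrpDemo {
    public static void main(String[] args) {
        String report = new ReportGenerator().generate("Q1 Sales", "Revenue figures");
        new ReportSender().send(report);
    }
}
```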
Changes effected in one class may necessitate updates or recompilation of the dependent classes, even when they are not directly affected by the change. Consequently, you may need to update your class more often, with each change growing more complex. Hence, the single responsibility principle ensures that each class is assigned only one responsibility. Classes, software components, and microservices that have only one responsibility are easier to maintain, test, and debug, and lead to faster deployment.

Don’t Repeat Yourself

The Don’t Repeat Yourself (DRY) principle states that duplication in logic should be eliminated via abstraction; duplication in process should be eliminated via automation.
Duplication is Waste
Adding unnecessary code to a codebase increases the amount of work required to extend and maintain the software in the future, and duplicate code adds to technical debt. Whether the duplication stems from copy-paste programming or a poor understanding of how to apply abstraction, it decreases the quality of the code. Duplication in process is also waste if it can be automated: manual testing, manual build and integration processes, and the like should all be eliminated wherever possible through the use of automation.
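A tiny illustration of eliminating duplicated logic via abstraction (the `Validators` helper is a hypothetical example): the validation rule lives in one place instead of being copy-pasted into every call site.

```java
// Hypothetical sketch: one shared abstraction for a rule that would
// otherwise be duplicated across every form that validates input.
class Validators {
    static boolean isNonEmpty(String value) {
        return value != null && !value.trim().isEmpty();
    }
}

class DryDemo {
    public static void main(String[] args) {
        // Both call sites reuse the same abstraction rather than re-implementing it.
        System.out.println(Validators.isNonEmpty("Alice"));
        System.out.println(Validators.isNonEmpty("   "));
    }
}
```

If the rule later changes (for example, to also reject strings of punctuation), it changes in exactly one place.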
Dependency Inversion Principle 
This principle of architecture design deals with the high-level modules of an application, which generally contain complex logic. These modules should be easily reusable and unaffected by changes in the low-level modules, which deal with the application's utility features. An abstraction needs to be introduced that can decouple the high-level and low-level modules from each other.
Introducing a simple interface abstraction between the higher-level and lower-level software components eliminates the dependencies between them. The Dependency Inversion Principle enables architects to modify both the higher-level and lower-level components without affecting any other classes, as long as the interface abstractions remain the same. In other words, the principle states that "both components should depend on abstractions."
However, it must be noted that the Dependency Inversion Principle is not a tool for resolving dependencies; rather, it enables developers to design an architecture that allows the various modules of the application to be tested in isolation.
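A minimal sketch of the inversion (all names here are hypothetical): the high-level `Notifier` depends only on the `MessageSender` interface, and the low-level `EmailSender` implements that same interface, so both sides depend on the abstraction.

```java
// Hypothetical sketch: high-level policy and low-level detail both
// depend on the MessageSender abstraction, not on each other.
interface MessageSender {
    String send(String text);
}

class EmailSender implements MessageSender {     // low-level detail
    public String send(String text) {
        return "email: " + text;
    }
}

class Notifier {                                 // high-level policy
    private final MessageSender sender;

    Notifier(MessageSender sender) {
        this.sender = sender;
    }

    String notifyUser(String text) {
        return sender.send(text);
    }
}

class DipDemo {
    public static void main(String[] args) {
        Notifier notifier = new Notifier(new EmailSender());
        System.out.println(notifier.notifyUser("build finished"));
    }
}
```

In a test, `Notifier` can be constructed with a fake `MessageSender`, which is exactly the isolated-testing benefit the paragraph above describes.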
Explicit Dependencies Principle
Strong inter-module dependencies are regarded as an indicator of poor software design. Tightly coupled systems, in which modules have excessive dependencies, are difficult to work with: different modules cannot be studied easily in isolation, and revising or extending functionality becomes hard. Identifying architectural dependencies proactively in the development life cycle, along with supporting metrics, leads to better communication about architecture quality.
If a component or class relies on other components to accomplish its operations, those other components are known as its dependencies. Classes or components can have both implicit and explicit dependencies.
This principle of architectural design declares class-level dependencies explicitly at the time of class construction. Most of the time, an explicit dependency is an interface, which can be exchanged for other implementations at any point in the design life cycle, whether in production or during testing and debugging. This principle makes the architecture loosely coupled, easier to test, and accepting of change or enhancement.
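A small sketch of an explicit, constructor-declared dependency (the `Clock` interface and `Greeter` class are hypothetical names): the class states in its constructor exactly what it needs, so a test can hand it a fixed clock instead of the system one.

```java
// Hypothetical sketch: the dependency is declared in the constructor
// rather than reached for implicitly inside the methods.
interface Clock {
    long now();
}

class Greeter {
    private final Clock clock;           // explicit, visible dependency

    Greeter(Clock clock) {
        this.clock = clock;
    }

    String greet() {
        return clock.now() < 1_000L ? "early" : "late";
    }
}

class ExplicitDepsDemo {
    public static void main(String[] args) {
        // A test-friendly, fixed clock is swapped in via the constructor.
        Greeter greeter = new Greeter(() -> 42L);
        System.out.println(greeter.greet());
    }
}
```

The contrast is with an implicit dependency, such as calling `System.currentTimeMillis()` directly inside `greet()`, which is invisible in the class's signature and hard to replace in a test.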
Persistence Ignorance
The principle of Persistence Ignorance (PI) holds that classes modelling the business domain in a software application should not be impacted by how they might be persisted. Thus, their design should reflect as closely as possible the ideal design needed to solve the business problem at hand and should not be tainted by concerns related to how the objects’ state is saved and later retrieved. Some common violations of Persistence Ignorance include domain objects that must inherit from a particular base class, or which must expose certain properties. Sometimes, the persistence knowledge takes the form of attributes that must be applied to the class, or support for only certain types of collections or property visibility levels. There are degrees of persistence ignorance, with the highest degree being described as Plain Old CLR Objects (POCOs) in .NET, and Plain Old Java Objects (POJOs) in the Java world.
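A persistence-ignorant domain object, in the POJO sense described above, is simply a plain class (the `Customer` class here is a hypothetical example): no required base class, no ORM annotations, and no storage-related code.

```java
// Hypothetical sketch: a persistence-ignorant domain object — a plain
// class whose design reflects only the business problem, not storage.
class Customer {
    private final String name;
    private final String email;

    Customer(String name, String email) {
        this.name = name;
        this.email = email;
    }

    String getName() {
        return name;
    }

    String getEmail() {
        return email;
    }
}

class PojoDemo {
    public static void main(String[] args) {
        Customer customer = new Customer("Ada", "ada@example.com");
        System.out.println(customer.getName());
    }
}
```

A violation of the principle would be forcing `Customer` to extend a framework base class or to carry mapping attributes purely so that some persistence layer can save it.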
Bounded Context
Bounded Context is a central pattern in Domain-Driven Design. It is the focus of DDD's strategic design section, which is all about dealing with large models and teams. DDD deals with large models by dividing them into different Bounded Contexts and being explicit about their interrelationships.
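One common illustration of this pattern (both classes below are hypothetical names for the sketch): the same real-world concept, a product, is modelled differently in two bounded contexts, each carrying only what its own context needs, related by a shared identifier rather than a shared class.

```java
// Hypothetical sketch: two bounded contexts each keep their own model
// of "product"; the models are linked by identity, not by inheritance.
class CatalogProduct {            // "catalog" context: marketing view
    final String sku;
    final String description;

    CatalogProduct(String sku, String description) {
        this.sku = sku;
        this.description = description;
    }
}

class ShippingProduct {           // "shipping" context: logistics view
    final String sku;
    final double weightKg;

    ShippingProduct(String sku, double weightKg) {
        this.sku = sku;
        this.weightKg = weightKg;
    }
}

class BoundedContextDemo {
    public static void main(String[] args) {
        CatalogProduct catalogView = new CatalogProduct("SKU-1", "Espresso machine");
        ShippingProduct shippingView = new ShippingProduct("SKU-1", 7.5);
        // The two contexts agree only on the identifier.
        System.out.println(catalogView.sku.equals(shippingView.sku));
    }
}
```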