Dynamics upgrade Testing – Best Practices and Strategies for conducting Regression
https://www.indiumsoftware.com/blog/dynamics-upgrade-testing-best-practices/ (Mon, 14 Aug 2023)

Upgrading to a new version of Dynamics can be a significant undertaking, and it’s crucial to ensure that everything works as expected after the upgrade. Testing is an essential part of the upgrade process, and it should be performed thoroughly to avoid any issues that could affect the system’s stability or functionality.

Best practices to follow when performing Dynamics upgrade testing:

 

Plan Ahead:

Before commencing the upgrade process, it is crucial to carefully plan the testing approach. This includes developing a comprehensive test plan that encompasses all the scenarios that need to be tested, including both functional and non-functional aspects. This will guarantee adequate coverage of all critical areas of the system during the testing phase.

It is also important to prioritize the test approach to avoid insufficient or excessive testing. Therefore, it is advisable to consult a technical expert from the Engineering team early on. The engineer can recommend additions to or exclusions from the test scope and help finalize the test plan.

Test in a Sandbox Environment:

It is always a good practice to test upgrades in a separate sandbox environment that mimics the production environment. This provides a safe and secure environment to thoroughly test the upgraded system’s features and functionalities. It ensures that any issues discovered during testing do not have an impact on the live system. Furthermore, it allows for multiple iterations of testing without jeopardising data integrity or causing downtime.

Test Data Migration:

If data migration is a part of the upgrade process, it becomes crucial to thoroughly test the migration process itself. It is imperative to ensure that all data is transferred correctly, and that data integrity is maintained throughout the process. This entails verifying that the migrated data appears as expected in the new system and aligns with the intended outcomes.

By conducting comprehensive tests on the migrated data, you can identify any inconsistencies, errors, or discrepancies that may have occurred during the migration. This level of meticulous testing guarantees a seamless transition and minimizes the risk of data loss or corruption, providing confidence in the integrity and reliability of the upgraded system.

Test Third-Party Integrations:

When upgrading the Dynamics system, it is crucial to thoroughly test any third-party integrations that are in place. This testing should cover both inbound and outbound data transfers and verify that the integrations continue to work after the upgrade.

This involves conducting end-to-end testing scenarios that simulate real-world data exchanges between the Dynamics system and the integrated systems. By executing these tests, any potential compatibility issues, data inconsistencies, or disruptions in the information flow can be identified and addressed promptly.

By thoroughly testing third-party integrations, the upgraded Dynamics system can maintain robustness, reliability, and seamlessness in its operation.

Conduct Regression Testing:

Regression testing is a vital component of the upgrade process: it verifies that the upgraded Dynamics system does not introduce any new issues or bugs and that all previously working functionality continues to behave correctly in the upgraded environment.

Automated testing tools and frameworks can significantly aid in performing regression testing efficiently by automating repetitive test cases and facilitating test coverage. This approach allows for quick and consistent execution of test cases, reducing the testing effort and providing faster feedback on the system’s functionality.

The importance of regression testing lies in its ability to uncover hidden defects, compatibility issues, or unintended consequences of the upgrade process. By identifying and addressing these issues early on, you can mitigate risks, prevent system disruptions, and maintain a high level of user satisfaction.

Overall, regression testing serves as a critical quality assurance measure to validate the stability and correctness of the upgraded Dynamics system. It provides confidence that the system continues to function as expected, preserving the existing functionality, and ensuring a smooth transition for end-users.

Conduct Performance Testing:

Performance testing is critical, especially if the new version of Dynamics introduces changes that may impact system performance. Test the system under varying loads to identify any performance issues and ensure that the system can handle the expected workload without degradation in its performance.

By identifying and addressing performance issues early on, we can optimise the system’s performance, enhance the user experience, and maintain productivity.

Document Test Results:

It’s essential to document all testing activities and results. This documentation can be used as a reference for future upgrades and to ensure that all areas of the system have been adequately tested.

Strategies for conducting regression testing during a Dynamics upgrade:

Dynamics upgrades can be difficult and time-consuming, and regression testing is essential to making sure everything functions as it should. Regression testing verifies that the Dynamics system’s essential features and workflows have not been compromised or adversely affected by the upgrade, which helps preserve the system’s reliability, stability, and usability for end users.

Regression testing during a Dynamics upgrade can be done in the following ways:

 

Prioritize Testing – Not all functionality in the Dynamics system is equal, and some features may be more critical than others. Therefore, it’s essential to prioritize the testing of critical functionality during the regression testing phase. Focus on the most frequently used features, Screens & Ribbon buttons that use JavaScript extensively, and any areas that have been impacted by the upgrade. Identify high-risk areas that are more prone to issues due to the upgrade and allocate more testing resources to those specific components. This approach ensures that the most critical and vulnerable parts of the system receive the necessary attention during regression testing. By prioritizing based on risk, we can ensure that any potential issues are addressed first.

Create a comprehensive Regression Test Suite – Develop a comprehensive set of test cases specifically designed for regression testing. These test cases should cover all essential functionalities, workflows, and integration touch points within the Dynamics system. The test suite should encompass both positive and negative scenarios to thoroughly validate the system’s behaviour after the upgrade. Additionally, it should include verification of error messages, warnings, pop-ups, and scroll bar functions; verify these once in one area and then confirm that they behave consistently in the other areas as well.

Leverage Automated Testing Tools – Utilize automated testing tools to streamline the regression testing process. Automated tests can significantly speed up the execution of test cases and help identify any unexpected issues efficiently. Additionally, they allow for easier retesting whenever new changes or updates are made to the system. With reusable test scripts, we can establish a strong foundation for ongoing regression testing throughout the lifecycle of the Dynamics system. This saves time and effort by eliminating the need to recreate test cases from scratch.
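As one possible starting point, here is a minimal sketch of such an automated regression check written in Python with pytest and requests against the Dynamics 365 Web API; the organization URL, token handling, entity, and fields are assumptions to be replaced with whatever your upgrade actually touches.

```python
# Minimal sketch of an automated regression check against the Dynamics 365 Web API.
# Assumptions: DYNAMICS_ORG_URL and DYNAMICS_TOKEN come from your own auth setup,
# and the entity/fields below are placeholders for the ones your upgrade touches.
import os

import requests

ORG_URL = os.environ["DYNAMICS_ORG_URL"]  # e.g. https://<org>.crm.dynamics.com
TOKEN = os.environ["DYNAMICS_TOKEN"]      # acquired through your OAuth flow

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}


def test_accounts_entity_still_queryable():
    """After the upgrade, a core entity should still be reachable and expose expected columns."""
    resp = requests.get(
        f"{ORG_URL}/api/data/v9.2/accounts",
        params={"$select": "name,accountnumber", "$top": "5"},
        headers=HEADERS,
        timeout=30,
    )
    assert resp.status_code == 200
    for record in resp.json().get("value", []):
        # Columns relied on by reports and integrations should still be present.
        assert "name" in record
```

Checks like this can be re-run after every upgrade iteration, which is where the reusable-script savings come from.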

Test Integrations – Dynamics systems often integrate with other software, such as financial systems, marketing automation software, or customer relationship management tools. When upgrading Dynamics, it’s important to test these integrations thoroughly to ensure that they still function correctly.


Document and Track Defects – Document and report all issues that are identified during regression testing. This helps ensure that issues are tracked and addressed before the system goes live. Use a formal bug-tracking system to manage issues and ensure that they are addressed in a timely manner.

Consider using a Phased Approach – A phased approach to regression testing can help ensure that all functionality is tested thoroughly. This involves testing the most critical functionality first and gradually expanding testing to cover less critical functionality. This approach can help identify and address issues early in the testing process.

Involve Key Stakeholders – End-users can provide valuable feedback during regression testing, as they can identify issues that may not be apparent to testers. Consider involving end-users in the testing process, or conducting user acceptance testing (UAT), to ensure that the system meets their needs. Their input and feedback can provide valuable insights into the expected behaviour of the system and help identify any deviations or issues that may arise during the upgrade.

Use Version Control – Version control is essential during a Dynamics upgrade to ensure that changes are tracked, and that the system can be rolled back if necessary. Use version control software to track changes and ensure that all testing is performed on the correct version of the system.

Establish a Testing Timeline – It’s important to establish a testing timeline and ensure that all testing is completed before the system goes live. This includes allowing time for testing, bug fixing, and retesting. The testing timeline should be communicated to all stakeholders to ensure that everyone is aware of the testing schedule.

Use a Testing Checklist – A testing checklist can help ensure that all testing is performed consistently and that nothing is missed. The checklist should include all test cases and scenarios that need to be tested, as well as any issues that have been identified and need to be addressed.


In conclusion, Dynamics upgrade testing can be challenging, but organizations can overcome common challenges with the right approach and ensure a successful upgrade. Regression testing is an important part of the Dynamics upgrade process, and there are several strategies that organizations can use to conduct effective regression testing. By testing integrations, using a phased approach, involving end-users, establishing a testing timeline, and using a testing checklist, organizations can ensure that their Dynamics upgrade is successful and that the system functions correctly. By investing time and resources in regression testing, organizations can minimize the risk of issues arising after the system goes live and ensure that the upgrade is a success.

Best practices in preparing gherkin feature file
https://www.indiumsoftware.com/blog/best-practices-in-preparing-gherkin-feature-file/ (Tue, 27 Jun 2023)

Test Driven Development (TDD) is a development process in which the software requirements are broken down into smaller units and tests are created for each unit before the software is developed. In the TDD approach, developers first create tests for each functional unit and then develop the software so that all of those tests are satisfied by the code.

Behavior Driven Development (BDD) is another test development process derived from TDD. In BDD, testers create tests for the user behaviors. The Gherkin file is used for creating scenarios for end-to-end tests.

TDD vs. BDD Comparison

| Parameters | TDD | BDD |
| --- | --- | --- |
| Members involved | Developer | Developer, QA (or) Customer |
| Language used for creating tests | Programming language (e.g., Java, Python, etc.) | Gherkin / simple English (tests are defined in plain language) |
| Who will create and use tests? | Only developers write and use tests; only programmers with programming-language knowledge can understand them. | Anyone involved in the project can define tests, and developers and testers put them into practice; everybody working on the project can comprehend them. |
| Used in the phase of | Unit testing | Requirement understanding / E2E and regression testing |
| Suitable for | Projects that do not have end users, or projects with no dedicated QA team for testing | Projects with large customers, projects with complex functionality that want all user requirements documented in a readable format, projects aiming for a better user experience |
| Tools | JUnit, TestNG, Pytest, RSpec, csUnit & NUnit, etc. | Cucumber, SpecFlow, Behave, JBehave, Behat & Concordion, etc. |

What is Gherkin?

Gherkin is a simple structured language that helps to create behavior-based tests using simple English; the resulting file can be read and understood by anyone on the software development team.

Gherkin is mostly used to define the tests to cover the end-to-end workflow. Writing scenarios in an independent/standalone way without creating dependency on each other is important.

Each scenario in a Gherkin file will be considered a standalone test.

Gherkin files should be saved with a .feature extension to automate the scenarios.

Many assume that writing feature files in the Gherkin language is easy, but it’s not. Though Gherkin uses simple syntax (Given, When, And, Then…), it has to be used wisely to define meaningful steps; it is more of an art.

Preparing Gherkin is not just documentation work. A person involved in preparing Gherkin files has more responsibility as they are converting system requirements into system behaviors from a user perspective.


Best practices in preparing gherkin feature files.

Avoid lengthy descriptions.

We must keep the title and description short; when a description is long, readers tend to skip it.

Scenario Background.

The Background adds shared context to each scenario; it is like a pre-requisite for scenario execution. Though writing a Background is not mandatory the way scenarios and steps are, it throws more context on the test and reduces the repetition of steps, since it acts as a common pre-condition.

If there is no pre-condition to add for a scenario, the Background can still carry additional details that are required, or that help, to execute the scenario (a sketch follows the list below).

For Example:

  • Test Data,
  • Experiment files used for this test,
  • Database connection,
  • Other interface connectivity,
  • Etc.…
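For illustration, a minimal Background sketch that supplies test data and connectivity details shared by every scenario in the feature; the feature, step wording, file names, and values here are hypothetical.

```gherkin
# Hypothetical example: entity names, file names, and data values are made up.
Feature: Invoice management

  Background:
    Given the test database "qa_invoices" is reachable
    And the experiment file "invoices_2023.csv" is loaded as test data
    And the user "qa.analyst" is logged in with the "Finance" role

  Scenario: Create a new invoice
    When the user creates an invoice for customer "Acme Corp"
    Then the invoice appears in the "Open Invoices" list

  Scenario: Cancel an existing invoice
    When the user cancels invoice "INV-1001"
    Then the invoice status changes to "Cancelled"
```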

Scenario.

Prepare scenarios using clear statements, as we are documenting the requirements in a user-behaviour format. Project stakeholders may access these documents for various reasons: to read and understand the requirement, for the automation tester to develop a script, and for future reference.

Project members have different levels of application understanding. We must consider this and prepare Gherkin files from a layman’s perspective.

Preparing scenarios at a high level (or) using too many statements is not recommended. Prepare only apt scenarios and steps.

For example, compare a scenario written at too high a level with one that uses apt statements; a scenario with too many statements has the opposite problem, burying the essential behaviour in noise.
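A hypothetical illustration of the difference (feature and step names are made up): the first scenario is too high level to verify anything concrete, while the second states one behaviour with checkable outcomes.

```gherkin
Feature: Order processing
  # Hypothetical example: feature and step names are made up.

  # Too high level: it is unclear what is actually being verified
  Scenario: Verify order processing
    Given the user is logged in
    Then the order workflow works as expected

  # Apt level of detail: one clear behaviour with verifiable outcomes
  Scenario: Submit a valid order from the cart
    Given the user is logged in
    And the cart contains 1 "Wireless Mouse"
    When the user submits the order with a valid credit card
    Then the order confirmation page is displayed
    And an order confirmation email is sent to the user
```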

Pass accurate input parameters.

Always pass accurate values in the input parameters; otherwise the results may vary and our scripts will fail. By passing accurate values, we save a lot of time on failure analysis, script rework, re-execution, and so on.

Scenario Outline.

Using “Scenario Outline” is always good, especially when testing the same workflow with N sets of input parameters.

There are many advantages to using the “Scenario Outline” syntax:

  1. We can pass many parameter values using the same parameter key.
  2. The number of statements in the Gherkin file will reduce.
  3. Readability is good.
  4. Easy to maintain when there is an update or rework.

Using “Scenario” syntax:
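A hypothetical sketch of what this looks like with plain Scenario syntax, where the same workflow is repeated once per data set (product names are made up):

```gherkin
Feature: Product search
  # Hypothetical example: product names are made up.

  Scenario: Search for a wireless mouse
    Given the user is on the home page
    When the user searches for "wireless mouse"
    Then at least 1 matching product is displayed

  Scenario: Search for a mechanical keyboard
    Given the user is on the home page
    When the user searches for "mechanical keyboard"
    Then at least 1 matching product is displayed
```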

Using “Scenario Outline” syntax:
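The same hypothetical workflow expressed once as a Scenario Outline, with the Examples table supplying the N parameter sets:

```gherkin
Feature: Product search
  # The Examples table drives one run of the same steps per row.

  Scenario Outline: Search for a product
    Given the user is on the home page
    When the user searches for "<product>"
    Then at least <min_results> matching product is displayed

    Examples:
      | product             | min_results |
      | wireless mouse      | 1           |
      | mechanical keyboard | 1           |
      | usb-c hub           | 1           |
```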

Avoid Technical Terms.

Do not use any technical term (or) coding language in the Gherkin file, as it may not be understood by a non-technical person in the project.

Use the present tense

There is no such rule to prepare a Gherkin file only with the present tense, but I would suggest using the present tense wherever it is possible. It will keep the readers engaged with the application.

Statements with past/future tense.
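A made-up step fragment showing the kind of past/future phrasing to avoid:

```gherkin
# Avoid: mixes past and future tense
When the user clicked the "Download" button
Then the report will be saved to the downloads folder
```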

Statements with the present tense.
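The same made-up steps rewritten in the present tense:

```gherkin
# Prefer: present tense keeps the reader in the application
When the user clicks the "Download" button
Then the report is saved to the downloads folder
```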

Maintain the same naming convention.

We should maintain the same naming convention in all the scenarios inside a Gherkin file, and try to keep it consistent across all the Gherkin files in the project.

Add more negative scenarios.

This is a common rule applicable to any kind of testing. It is always good to have more negative scenarios in your test scripts. They help validate how effectively the application is designed, developed, and built to handle unexpected user behaviors.

For instance: does the application show a proper error message in an unexpected workflow, how quickly can the application recover, and is there any data loss?

Alignment

We must ensure that all the lines are properly aligned inside a Gherkin file.

Always prepare the Gherkin file perfectly, as this will be the first impression to others about the document and its quality and content. It creates interest in the readers and helps them to complete reading quickly.

Avoid Spelling Mistakes

Not a major mistake, but spelling mistakes should be avoided in the Gherkin file. It will create a bad impression on the readers. Keep the content of the file simple without mistakes.

Make sure the spell-check option is enabled while working on Gherkin files.
If spelling mistakes are not underlined in red, the spell-check extension is probably not installed; install and enable it.

For Ex: “Code Spell Checker” & “Spell Right” extensions.

Process-related best practices can also be considered in the project.

  • Gherkin file review/approval process.
  • Collect all the Gherkin files and place them in a common repository.
  • The Gherkin file should be updated whenever there is a change.
  • Provide access rights to everyone in the project.
  • Read-only access to the Gherkin folder (except leads/manager) 
  • Maintain proper folder structure.
  • Maintain version control.
  • Periodic backup and maintenance.
  • We can involve the functional team in preparing Gherkin files if the automation team fully engages with scripting work.


Conclusion

In conclusion, following best practices in preparing Gherkin feature files is crucial for effective and efficient software development. By adhering to guidelines such as writing clear and concise scenarios, using descriptive language, organizing scenarios with proper structure, and involving stakeholders in the process, teams can improve collaboration, facilitate test automation, and ensure the delivery of high-quality software. Implementing these best practices empowers teams to create feature files that are easily understood, maintainable, and valuable assets throughout the software development lifecycle.

Data Masking: Need, Techniques and Best Practices
https://www.indiumsoftware.com/blog/data-masking-need-techniques-and-best-practices/ (Wed, 17 May 2023)

Introduction

More than ever, the human race is discovering, evolving, and reinventing itself. The revolution in the Artificial Intelligence domain has brought the whole human species to a new dawn of personalized services. With more people adopting the Internet, demand for various services in different phases of life is increasing. Consider the Covid pandemic, a demon we are still at war with. During lockdowns, to stay motivated, we used audiobook applications and video broadcasting applications, attended online exercise and yoga sessions, and even consulted doctors through an application. While the physical streets were closed, there was more traffic online.

All these applications and websites have a simple goal: better service to the user. To do so, they collect personal information, directly or indirectly, intentionally or for the sake of improvement. The machines, from laptops to smart watches and even voice assistants, are listening to us and watching every move we make and every word we utter. Their purpose may be noble, but there is no guarantee of leakage-proof, intruder-proof, spammer-proof data handling. According to a study cited by Forbes, on average 2.5 quintillion bytes of data are generated per day, and this volume is increasing exponentially year by year. The data mining, data ingestion, and migration phases are the most vulnerable to potential data leakage. Alarmingly, cyber-attacks happen at a rate of about 18 per minute, and more than 16 lakh cybercrimes were reported in India alone in the last three years.



Need of Data Masking

Besides online scams and frauds, cyber-attacks and data breaches are major risks for every organization that mines personal data. A data breach occurs when an attacker gains access to the personal information of millions or even billions of people, such as bank details, mobile numbers, social security numbers, etc. According to the Identity Theft Resource Center (ITRC), 83% of the 1,862 data breaches in 2021 involved sensitive data. These incidents are now considered instruments of modern warfare.

Data Security Standards

Different countries and regulatory authorities impose different rules to protect sensitive information. The European Union’s General Data Protection Regulation (GDPR) protects personal and racial information along with digital information, health records, and the biometric and genetic data of individuals. The United States Department of Health and Human Services (HHS) passed the Health Insurance Portability and Accountability Act (HIPAA), which sets security standards for the privacy of individually identifiable health information. The International Organization for Standardization and the International Electrotechnical Commission’s (ISO/IEC) 27001 and 27018 standards promote confidentiality, integrity, and availability norms for big data organizations. In Extract, Transform and Load (ETL) services, data pipeline services, or data analytics services, adhering to these security norms is crucial.

Different Security Standards

Read this insightful blog post on Maximizing AI and ML Performance: A Guide to Effective Data Collection, Storage, and Analysis.

Techniques to Protect Sensitive Data

All the security protocols and standards can be summarized into three techniques: data de-identification, data encoding, and data masking. Data de-identification protects sensitive data by removing or obscuring identifiable information. In de-identification, the original sensitive information is anonymized (the records are removed from the database entirely), pseudonymized (the sensitive information is replaced with aliases), or aggregated (the data is grouped and summarized before being presented or shared, rather than sharing the original elements).

In de-identification the original data format or structure may not be retained. Data encoding refers to encoding the data in ciphers that can later be decoded by authorized users; common encoding techniques are encryption (key-based encryption of data) and hashing (the original data is converted to hash values using Message Digest (md5), Secure Hash Algorithm (sha1), BLAKE hashing, etc.). Data masking, on the other hand, replaces the original data with fictitious or obfuscated data, where the masked data retains the format and structure of the original. These techniques do not fall into a particular class or follow a hierarchical order; they are used alone or in combination, depending on the use case and the sensitivity of the data.

Comparative abstraction of major techniques
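To make the distinction concrete, here is a minimal sketch in Python (standard library only) contrasting hashing, which destroys the original format, with masking, which preserves it; the record layout, salt, and masking rules are purely illustrative.

```python
# Sketch contrasting hashing (encoding) with masking (redaction).
# The record layout, salt, and masking rules below are illustrative only.
import hashlib

record = {"name": "Jane Smith", "email": "jane.smith@example.com", "phone": "9876543210"}


def hash_value(value: str, salt: str = "static-salt") -> str:
    """Pseudonymize a value with a one-way SHA-256 hash (format is not preserved)."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()


def redact_phone(phone: str) -> str:
    """Partially mask a phone number, keeping the length and the last two digits."""
    return "X" * (len(phone) - 2) + phone[-2:]


encoded = {**record, "email": hash_value(record["email"])}
masked = {**record, "email": "j********h@example.com", "phone": redact_phone(record["phone"])}

print(encoded)  # email becomes a fixed-length hash: structure and format are lost
print(masked)   # email and phone keep their shape: format preserved, values obscured
```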

Data masking is of two types: Static Data Masking (SDM) and Dynamic Data Masking (DDM). Static data masking replaces sensitive data with realistic but fictitious data that keeps the structure and format of the original data. Common SDM techniques are:

  • Substitution – replace the sensitive data with fake data.
  • Shuffling – shuffle the data in a column to break the link between the original value and its references.
  • Nulling – replace sensitive data with null values.
  • Encryption – encrypt the sensitive information.
  • Redaction – partially mask the sensitive data so that only one part of it is visible.

Dynamic data masking, in turn, covers:

  • Full masking – mask the entire value.
  • Partial masking – mask a portion of the value.
  • Random masking – mask at random.
  • Conditional masking – mask only when a specific condition is met.
  • Encoding and tokenization – convert data to a non-sensitive token value that preserves the format and length of the original data.

SDM masks data at rest by creating a copy of an existing data set; the masked copy is what gets shared with analysis and development teams. Updates to the original data do not reflect in the masked copy until a new copy is made. DDM, by contrast, masks data at query time, so updated data is also returned in masked form and the data stays live without creating data silos. SDM is the primary choice of data practitioners because it is reliable and completely isolates the original data, whereas DDM depends on query-time masking, which can fail under some adverse conditions.

SDM vs DDM
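A minimal sketch of the SDM idea in plain Python: copy the data set, then apply substitution, shuffling, and redaction to the copy only. The field names, fake-name pool, and masking rules are illustrative, not a prescribed scheme.

```python
# Sketch of static data masking (SDM): copy the data set, then mask only the copy.
# Field names, the fake-name pool, and the masking rules are illustrative only.
import copy
import random

production_rows = [
    {"customer": "Jane Smith", "ssn": "123-45-6789", "city": "Austin", "balance": 1200},
    {"customer": "Raj Patel", "ssn": "987-65-4321", "city": "Chennai", "balance": 8450},
]

FAKE_NAMES = ["Alex Doe", "Sam Lee", "Chris Kim"]


def static_mask(rows):
    masked = copy.deepcopy(rows)          # the original (data at rest) is never touched
    cities = [row["city"] for row in masked]
    random.shuffle(cities)                # shuffling: break the link to the original row
    for row, city in zip(masked, cities):
        row["customer"] = random.choice(FAKE_NAMES)   # substitution with fake data
        row["ssn"] = "XXX-XX-" + row["ssn"][-4:]      # redaction: keep last four digits
        row["city"] = city
    return masked


masked_copy = static_mask(production_rows)  # this copy is what analysis/test teams receive
```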

Data Masking Best Practices

Masking of sensitive data depends on the use case of the resultant masked data. It is always recommended to mask the data in the non-production environment. However, there are some practices that need to be considered for secure and fault-tolerant data masking.

1. Governance: The organization must follow common security practices based on the country it’s operating in and the international data security standards as well.

2. Referential Integrity: Tables with masked data should preserve references so that joins still work while analyzing the data, without revealing sensitive information.

3. Performance and Cost: Tokenization and hashing often convert data to a standard size that may be larger than the original. Masked data shouldn’t impact general query processing time.

4. Scalability: For big data, the masking technique should be able to mask large datasets as well as streaming data.

5. Fault tolerance: The technique should tolerate minor data irregularities such as extra spaces, commas, or special characters. Scrutinizing the masking process and the resulting data regularly helps avoid common pitfalls.


Conclusion

In conclusion, the advancements in technology, particularly in the domain of Artificial Intelligence, have brought about a significant change in the way humans interact with services and each other. The COVID-19 pandemic has further accelerated the adoption of digital technologies as people were forced to stay indoors and seek personalized services online. The increased demand for online services during the pandemic has shown that technology can be leveraged to improve our lives and bring us closer to one another even in times of crisis. As we continue to navigate the post-pandemic world, the revolution in technology will play a significant role in shaping our future and enabling us to live a better life.

 
