Introduction to Hyper-V for software testers

Written by: Łukasz Wysocki, Software Tester

In every tester’s life there may come a time when an additional, simple environment is needed, isolated or not. This is especially true if you work on Windows and running Docker would take too long or is discouraged for some reason. The easiest solution in such a situation is the Hyper-V service and a virtual machine created with it. In this article, you will learn: 

  • how to run the Hyper-V service on a machine with Windows 10 operating system, 
  • how to create a Windows or Linux virtual machine, 
  • how to set up the network connection for running virtual machines.

On a daily basis, I work with my team on software testing for various clients.

Bear in mind that the Microsoft Hyper-V service is reduced in functionality when run on the client version of Windows 10, compared to the edition intended for servers, i.e., Windows Server. 

Before enabling the Hyper-V functionality, it is advisable to check whether the computer that will host your virtual machines supports hardware virtualization and whether the option is enabled. Depending on the processor, this will be SVM mode for AMD processors or Intel Virtualization Technology (VT-x) for Intel processors. You will find the setting in the BIOS/UEFI options.

The next step is to check if the operating system version supports Hyper-V at all. To verify this, type systeminfo in the command prompt window. In the response received, the most important fields to verify are: 

  • OS Name: Microsoft Windows 10 Enterprise – Hyper-V is supported in Windows 10 in the Enterprise, Pro and Education editions. To run Hyper-V in the Home version you would need to upgrade it (Settings > Update and Security > Activation),
  • Hyper-V Requirements:
    • VM Monitor Mode Extensions: Yes
    • Virtualization Enabled In Firmware: Yes
    • Second Level Address Translation: Yes
    • Data Execution Prevention Available: Yes
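
If you prefer PowerShell to scanning the full systeminfo output, the same requirement fields can be listed with a single command. This is just a convenience sketch and assumes Windows PowerShell 5.1 or later, where Get-ComputerInfo is available:

# List only the Hyper-V related fields reported by the system
Get-ComputerInfo -Property "HyperV*"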

If all requirements are met, you can run Hyper-V on the machine. You can do this in several ways: 

  • using PowerShell: 

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All 

  • using the command line and the DISM tool: 

DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V 

  • using the GUI: 

Select Control Panel\All Control Panel Items\Programs and Features, then click on Turn Windows Features on or off and select Hyper-V from the list of available components. The components needed to run Hyper-V will be selected automatically. Finally, confirm by clicking OK.


Regardless of how the Hyper-V functionality is enabled, it will be available only after a system restart. 

To start creating a virtual machine, start Hyper-V Manager, and then, on the right side of the window, select New and then Virtual Machine.


The Virtual Machine Creation Wizard will open. Confirm the first window by clicking Next. In the next window, type in the name of the virtual machine and, optionally, its location on the drive. It is a good idea to select the Store the virtual machine in a different location option, as it gives you more control over where the virtual machine files are kept.

Confirm your selections by clicking Next. In the next dialog window, select the Generation 2 option and confirm your selection by clicking Next.

In the next step, you must specify how much RAM the virtual machine will have. You can leave the Use Dynamic Memory for this Virtual Machine option selected. The next dialog window allows you to configure the network. As networking is covered later in the article, you can click Next without making any changes. 

On the next screen, you'll configure the virtual drive for the virtual machine. Enter the path for a new virtual drive or attach an existing one to the virtual machine.

Next, you will be allowed to attach the ISO image of the operating system to be installed on your virtual machine. After confirming your choices, a summary of the future virtual machine parameters will appear. Click Finish to finalize the procedure. You can now see the new virtual machine in Hyper-V Manager.
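
If you prefer scripting to clicking through the wizard, the same machine can be created with the Hyper-V PowerShell module. The snippet below is only a sketch; the machine name, paths, and sizes are example values to replace with your own:

# Create a Generation 2 VM with a new 40 GB virtual disk (example values)
New-VM -Name "TestVM" -Generation 2 -MemoryStartupBytes 2GB -Path "D:\VMs" -NewVHDPath "D:\VMs\TestVM\TestVM.vhdx" -NewVHDSizeBytes 40GB
# Keep dynamic memory enabled, as in the wizard
Set-VMMemory -VMName "TestVM" -DynamicMemoryEnabled $true
# Attach the installation ISO (example path)
Add-VMDvdDrive -VMName "TestVM" -Path "D:\ISO\Win10.iso"

For a Generation 2 machine you may also need to point the firmware at the DVD drive, e.g. with Set-VMFirmware -VMName "TestVM" -FirstBootDevice (Get-VMDvdDrive -VMName "TestVM"), so that the installer boots first.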

The next step is to configure network interfaces for the virtual machine. You can do it by configuring a Virtual Switch. It is the link between the physical host interface and the virtual machine interface.  

To create it, click Virtual Switch Manager in the Actions panel. In the newly opened window, select the New virtual network switch option, then type in the name of the new Virtual Switch and select the physical interface of the host to which it is to be connected.

Confirm your choices by clicking Apply. Now, create a virtual interface that will be connected to the Virtual Switch. To do this, select the virtual machine and then click Settings. The settings window for the virtual machine will open. In the side panel, select Add Hardware, and then from the available options – Network Adapter.

Next, select the newly created Network Adapter and the appropriate Virtual switch for it.
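
The same network configuration can be scripted as well. The sketch below assumes an external switch bound to a physical adapter named "Ethernet" and a virtual machine called "TestVM"; both names are examples:

# Create an external virtual switch bound to the physical NIC
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
# Add a network adapter to the VM and connect it to the new switch
Add-VMNetworkAdapter -VMName "TestVM" -SwitchName "ExternalSwitch"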

After configuring the virtual machine and connecting it to the network, the only thing left to do is start it. To do this, select the machine, click Connect, and start it by selecting the Start command from the Action menu or by pressing Ctrl + S.
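
The same can be done from PowerShell ("TestVM" is again an example name):

Start-VM -Name "TestVM"
# Open the console window for the machine running on the local host
vmconnect.exe localhost "TestVM"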

From this point on, it behaves like a physical machine, so the next step is to install the operating system.

After installing the Operating System, the virtual machine is ready to use.

The only major difference when creating a virtual machine with a Linux operating system (e.g. Ubuntu) is the necessity to choose Generation 1. The other steps are the same as for a virtual machine with the Windows operating system. 

There is also an easier way to create virtual machines. To use it, select the Quick Create option in the Hyper-V Manager’s main window, choose one of the available operating systems, the virtual machine name, and the Virtual Switch to be connected to the new machine, then approve your choices by clicking Create Virtual Machine. The other parameters of the virtual machine (disk location, RAM, etc.) are determined automatically and can be edited after the virtual machine is created.

When the creation is complete, a summary window will appear. From there, you can connect to the virtual machine or edit its parameters.

After connecting to the virtual machine, you need to install the operating system and you can enjoy a Linux virtual machine running on a Windows host.

Using the Quick Create option is useful for beginners or those who don’t want (or don’t have time) to dive into the process of creating a virtual machine. However, going through the entire virtual machine creation gives you more control over the created machine and its parameters.

Was this helpful? Like our profile on Facebook or LinkedIn and stay informed about our other interesting articles.  

If you want to join our testing team, click here!



The test result metric. A practical example.

Written by: Michał Zaczyński, Tests Manager

Metrics – a "must-have" when reporting to the Project Manager for some, a dreaded tool of repression for others. Yet what they should be for a development team is a tool and a form of support. 

Intelligently gathered metrics can be very useful in diagnosing problems with product quality and with the functioning of the entire project or a team. For this reason, one of the basic metrics that should be used in every project is the trend of test result changes. It does not require much effort to use, and yet it allows us to observe certain aspects of the process that show only over a longer period. Below are several real-life examples of this metric, based on automated test results, used to find and solve various problems. They show how helpful monitoring test result trends is in assessing software quality. 
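
How the numbers behind such a trend are collected depends on the test framework; the sketch below is only a hypothetical PowerShell example, assuming the results have been exported to a results.csv file with Version and Result columns:

# Percentage of test_error results per released version (hypothetical CSV layout)
Import-Csv .\results.csv |
    Group-Object Version |
    ForEach-Object {
        $errors = @($_.Group | Where-Object { $_.Result -eq 'test_error' }).Count
        [pscustomobject]@{
            Version      = $_.Name
            TestErrorPct = [math]::Round(100 * $errors / $_.Count, 1)
        }
    }

Plotting these percentages version by version gives exactly the kind of chart discussed in the cases below.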

Case 1: When right after the release everything stops working

Version 1.95 was the last release prior to merging the changes from a separate branch, the one developed as the 2.x version. These changes came as a surprise to the testing team, as suggested by the test_error result returned by most automated tests. What is more, it is clear that until the 1.3 version some defects were “hidden” behind test_error results and could be neither detected nor repaired at that point. 

The reason: faulty cooperation between the developer and tester teams. 

The result: a period of non-operational automatic testing, late detection of old defects 

Case 2: When new tests are created all the time, but nobody cares for the ones already made.

The effect: a constant number of test_error results. 

It can be seen that although for versions 2.104 through 2.106 new errors appeared in automatic tests, they were corrected and the number of test_error results came back to its constant value (10-20%). 

The reason: the team concentrates on creating new tests and does not delegate anyone to supervise the previously developed tests implemented in the test cycle. 

The result: the team gets used to a certain number of tests failing and thus allows some areas of the software to remain untested.

Case 3: When everything seems to be fine

For over a month no test_error results were recorded, although many changes were introduced by developers at this time. The only tests that failed were those due to application defects.  

  

The reason: the automation area is not aligned with the development area of the software – the testing team did not prepare automated tests for the area actively developed at that time.  

The result: the false conviction that the application is well covered by tests in all areas and the tests are “self-sufficient.”  

Case 4: When there are no serious problems

The sporadic test_error results are repaired quickly, and their number always comes back to 0. The number of failed tests due to software defects remains below 10% most of the time, which is usually considered an acceptable pre-release value. 

A proper testing team reaction can be seen when the number of test_error results grows rapidly at version 20.352 and quickly reverts to the default value at version 20.354, which means the faulty tests have been corrected. At this stage, no significant problems are identified. 

 Of course, in some cases, the test result trends will not suffice to draw any rational conclusions. Still, this metric is always a good source of general information on the project and the areas of the process which could use additional analysis.

Each day we perform thousands of automated tests. We can also test your software. Write to us!


How to analyze pharmaceutics market data in the cloud. Learn the technical details of our solution.

Written by: Marketing Team

In April 2020, we took part in the event opening a Google Cloud region in Poland. Michał Zieniewicz, Cloud Architect at Solwit, talked to Michał Górski, a big data developer from Farmaprom. They discussed the technical details of the cloud solution implemented by Solwit and the changes that cloud technologies brought to the company and its image among its clients.

Do you prefer to watch the video? You will find the link at the end of the article. You can also read the full case study: Real-time analysis of pharmaceutical market data 

Michał Zieniewicz, Solwit: Good morning and welcome to the session on real-time data analysis and the possibilities provided by BigQuery. My name is Michał Zieniewicz. I am the manager of the Cloud and Integration Services Business Unit in Solwit. I am responsible for developing the cloud portion of our organization and our employees’ cloud competencies. Solwit was founded exactly 10 years ago and since then we have always supported the cloud. We also develop and test business software, support our clients in the areas of artificial intelligence, analytics, data warehouses, and cloud environment optimization. We hire about 350 specialists, Google-certified architects and developers included.

Present with me today is Michał Górski from Farmaprom. Hi Michał.

Michał Górski, Farmaprom: Hi Michał. My name is Michał Górski and I work at Farmaprom as a Big Data developer. Our job is to integrate multiple entities in the Polish pharmaceutical market: pharmaceutical manufacturers, wholesalers, and pharmacies. 

Michał Zieniewicz, Solwit: Ok, we have dealt with the formalities, we can move on to the topic of our meeting. Tell us… how did Farmaprom operate before its “Google era?”

Michał Górski, Farmaprom: Before we moved to the Google cloud we used to experience a lot of problems. As the company grew, new data sources were cropping up and it was difficult to integrate them. Frankly, we didn’t have a good way of linking the sources with each other.

Michał Zieniewicz, Solwit: What did you use all this data for?

Michał Górski, Farmaprom: We had two primary goals. One was to maintain a data warehouse for ourselves and for our clients. The second one was to provide our analytics department with tools making it possible to generate reports for clients and prepare market analyses.

Michał Zieniewicz, Solwit: Ok, Let’s start from the beginning. What data do you process in Farmaprom?

Michał Górski, Farmaprom: We have two kinds of data: sell-in information, produced where the manufacturer, pharmaceutical wholesaler, and pharmacy operations meet; and sell-out information, produced from the client-pharmacy interaction when the receipt is issued.

Michał Zieniewicz, Solwit: Ok, let’s talk about the sell-in part. What does your ETL process look like?

Michał Górski, Farmaprom: We don’t have ETL. We have ELT. The data source is our CRM, which is a MySQL database. Debezium is responsible for Change Data Capture and the data flows directly to Kafka, using the topic-per-table scheme.

Michał Zieniewicz, Solwit: Why don’t you use Pub/Sub?

Michał Górski, Farmaprom: We started developing this pipeline a few years ago when Pub/Sub offered quite limited options. Data retention was no longer than a week if I remember correctly – after being read and ACKnowledged, the data vanished from the subscription. Now it is much better, but I guess retention is still monthly at best. Also, the data can be recovered now, but back then it was not possible. Finally, Debezium itself forced us to use Kafka.

Michał Zieniewicz, Solwit: So, let’s get back to the ELT process. How is data moved from Kafka to BigQuery?

Michał Górski, Farmaprom: Firstly, we use the schema registry in Kafka a lot. The schemas coming from our database that we register are reflected in the tables that appear in BigQuery. The data from Kafka is loaded to GCS as AVRO files, which are then loaded into the proper BigQuery tables. We upload those files quite often, because the maximum delay between the CRM and the data warehouse is about 8 minutes. Partitioning is also thrown in, but we partition some data monthly. BigQuery offers daily partitioning, so we assign, for example, all the data that comes in September to Sept. 1st, and this way we don’t create too many partitions if there is not that much data in the table.

Michał Zieniewicz, Solwit: Why don’t you load it directly, but use GCS files instead?

Michał Górski, Farmaprom: Uploading is free and streaming is not. There are also no bandwidth limits when uploading. Streaming does have those limits and sometimes we bump into them. On the other hand, the eight-minute delay is acceptable in our business.

Michał Zieniewicz, Solwit: Alright, we have discussed the extract and load stages. But what about the third element of the process – transform? 

Michał Górski, Farmaprom: We have this raw AVRO from Debezium loaded to BigQuery, to the right tables. This data is hard to analyze because there is information on what the record looked like before and after the change, and metadata regarding, e.g. the Binlog. These are not nice things to analyze. We use certain view layers to process this information. The first such view is history.

For example, if we have a table with orders which has changed multiple times and now exists in 10 versions, we combine the tables imposing the most recent schema. Seeing that a given key came on Monday, Wednesday, and Friday, we can calculate when the given record was valid in which version, using LEAD/LAG window functions over a partition. We create the full table including the history of all its records and, with such a view in place, we can define its conditions for the current point in time – we get the currently valid records only. We can also define it for the records valid at midnight the day before, so that the information flowing in does not change the reports. We can utilize partitioning and then limit the data, for example, to the two most recent years. Not every analysis needs to go back as far as 15 years, so we save some costs. We have about ten such view setups, maybe more than a dozen, and they allow our analysts to review the data the way they find most convenient. 

Michał Zieniewicz, Solwit: It all sounds pretty complicated. Have you had any problems with it?

Michał Górski, Farmaprom: Yes, we used to have one problem with this solution. When a query is too complex for BigQuery, it will not be executed. And sometimes the analysts wrote queries against these views, and every such view is a union of multiple table versions. It is not a matter of the amount of data, but of the complexity of the query – hence the “query too complex” error.

So, to handle this problem we perform manual materialization of the views, that is merging them to the most recent version. Moreover, we add the incoming data on the fly, so we don’t have ten versions stuck together, but one snapshot and whatever has come in since its creation.

Michał Zieniewicz, Solwit: That’s clever. And tell me, is it the only pipeline in the sell-in data?

Michał Górski, Farmaprom: No, we have more, for example, the HDM data describing the doctor information. But the way they work is quite similar in all cases.

Michał Zieniewicz, Solwit: And what about the sell-out data? Are they different? Are they processed differently?

Michał Górski, Farmaprom: Yes. This pipeline is newer, so we have decided to use Pub/Sub. On the one hand, the source is MS SQL, and on the other, it is directly the software employed in pharmacies. So there’s no need to use Debezium, which, again, would force us to use Kafka. Besides, Pub/Sub has just increased its data retention.

Beyond that, the way the pipeline works is similar, because we read data from the subscription itself and use it to build a file, also on GCS. When we know that the file is ready and everything has been saved properly and the data may be uploaded, we ACKnowledge it in Pub/Sub and confirm its reception. Pub/Sub gives us 600 seconds to do so, so in theory, the delay should be no more than ten minutes. Even if we cross this threshold, it is not a problem – we can read this again. But it virtually never happens.

Michał Zieniewicz, Solwit: Right. Now all your data is in BigQuery. What next?

Michał Górski, Farmaprom: First, we can share it with our analytics team to generate reports for our clients. Secondly, we use ClickHouse as a sort of a front for BigQuery. BigQuery calculates data marts, which we load directly into ClickHouse. This is the basic analytics for our clients: budget usage and plan realization.

What is more, the most important thing is the goal – the dedicated data warehouse. Every client can say: “I would like to have a data warehouse” and we actually create this warehouse for them. In the past, we used Oracle Business Intelligence, but our clients used to tell us that they didn’t want our BI and preferred raw data. These clients were large pharmaceutical companies and had their own BIs, sometimes more than one per company, so they didn’t need more. We wanted to meet their expectations, so we tested Spark. Calculating a basic sales mart took Spark about 14 hours. And it wasn’t Spark installed on my laptop, it was a cluster of ten solid computers. Not a big one, but still a cluster. Later we uploaded the data to BigQuery and the 14 hours turned into three minutes.

Michał Zieniewicz, Solwit: An amazing result.

Michał Górski, Farmaprom: Yes, this was the ultimate argument, so now we create such projects for our clients. Everyone who wants to have a data warehouse gets a separate GCP project, all the required data is uploaded and integrated. To show our clients this solution’s capabilities we also started creating Data Studio dashboards. At that point, our clients’ views on Oracle changed drastically. Everyone who was using or testing our solutions wanted to have our dashboards. Today, our clients don’t say “we want no BI, no additional visual layer,” they say “give us dashboards, give us Data Studio.”

Michał Zieniewicz, Solwit: It shows how technology changes the client’s perspective regarding data. Amazing. Does this approach towards data warehouse result in any problems?

Michał Górski, Farmaprom: Yes, it does, because every time a client declares they want to use Data Studio dashboards, we inform them that it requires Google accounts. Usually, at that point, the Polish branch calls the IT headquarters of the company and requests such accounts. And the answer is always the same – no way. Then we come in and try to solve the problem and explain what needs to be done. We also test different solutions, such as Superset, which can eliminate the need for Google accounts. We have tested Data Studio connectors and logging via service accounts, not to create new ones. Currently, we are verifying what Workforce can offer us, and maybe together with Data Studio embedding on our part, it will prove a good solution. Long story short – Google accounts for our clients are going to be created but on the fly. There are several solutions, we will certainly choose what works best.

Michał Zieniewicz, Solwit: Has the fact, that your data is processed so fast opened any new business opportunities? Does this technology allow you to do anything you couldn’t do before?

Michał Górski, Farmaprom: Oh, there are a lot of such examples. Let’s take shopping carts (the information from prescription receipts). Before, their analysis was very limited, both quantity- and time-wise. It took several hours of our MS SQL 32-core server’s work.

Michał Zieniewicz, Solwit: It wasn’t just any server was it?

Michał Górski, Farmaprom: Right, but still, BigQuery does a thorough analysis of the shopping carts in a matter of seconds.

Michał Zieniewicz, Solwit: You could say it is done in real time. Seconds compared to hours – that’s what I call improvement.

Michał Górski, Farmaprom: There is also an interesting bonus we have got from this whole Google cloud experiment. Working with it was so enjoyable that we have just finished migrating our entire infrastructure. Now it is not only our BigData division that works in the cloud, but the whole Farmaprom.

Michał Zieniewicz, Solwit: 100% in the cloud! Awesome.

Michał Górski, Farmaprom: Yes, it is.

Michał Zieniewicz, Solwit: Ok, to sum up: Google technologies allow Farmaprom to achieve its business goals, current as well as future ones, and as a bonus they helped you grow and learn about the new possibilities that new technologies introduce.

Do you want to move your business to the cloud, just like Farmaprom did? Drop us a line!


Software Quality – what is it all about?

Written by: Piotr Strzałkowski (Embedded Domain Manager)

I have encountered a wrong understanding of software quality far too many times, both in general terms and particularly in the field of embedded systems. In this article, I would like to deal with some myths surrounding this subject and make the notion clearer for everyone interested in developing and testing embedded software. So, what is software quality?

The definition depends on who we ask

The definitions may vary wildly depending on whom you ask. Specialists often assume the ones that fit their particular areas of expertise. For example, a developer might say it means clean code with a cohesive naming convention, consistent formatting throughout the project, and possibly no coding errors. A UI designer, on the other hand, will probably focus on a clean, efficient frontend, characterized by high accessibility and modern visual themes. So which approach is the right one?

The definition according to standards

The actual definition can be found in the most recent version of the systems and software engineering standard ISO 25010, which defines the following quality model:

As we can see, software quality has been described as a set of characteristics and sub-characteristics required for a piece of software to be considered of high quality.

It is important to remember that blindly chasing the highest values of the metrics for all of the characteristics may lead to dire consequences, and, as always, a reasonable balance is the best choice. For example, in embedded systems, not all of them can be met when the system is not based on any operating system or does not co-operate with other systems.

The following is a list of all the terms provided by the standard, with a short description for each of them. Note that the degree to which the characteristics are met is always related to the specified requirements.

  • Functional suitability – represents how the software functions.
    • Functional completeness – how well it complies with its specification.
    • Functional correctness – how correct and precise its results are.
    • Functional appropriateness – how well it performs its assumed function.
  • Performance efficiency – represents how well the software utilizes the resources it is given.
    • Time behavior – how good its response time, processing time, and throughput rates are.
    • Resource utilization – how close the amount of resources used meets the specification.
    • Capacity – how close the maximum limits of its parameters are to the assumed requirements.
  • Compatibility – represents how well the software can exchange information with other systems or components if using the same infrastructure.
    • Co-existence – how efficiently it works when sharing the environment and resources with different products, without harming them.
    • Interoperability – how efficiently multiple products can exchange information and use it.
  • Usability – represents how well the software can be utilized by certain users to achieve certain results.
    • Appropriateness recognizability – how well it can be recognized by users as useful for their purposes.
    • Learnability – how easy and safe it is to learn how to use it. 
    • Operability – how easy and intuitive it is for the user to operate and control. 
    • User error protection – how well it protects the user against making errors. 
    • User interface aesthetics – how pleasing and satisfying it is for the user.
    • Accessibility – how well adjusted it is to users with particular conditions and abilities.
  • Reliability – represents how well the software functions over time.
    • Maturity – how well it functions under normal, everyday operation.
    • Availability – how available and operational it is when it is required. 
    • Fault tolerance – how well it operates despite HW and SW errors appearing.
    • Recoverability – in case of a failure, how much data it can recover and how well it re-establishes the proper state.    
  • Security – represents how well the software protects its data while allowing certain users to access certain information.
    • Confidentiality – how well it ensures that data is available only to the authorized users.
    • Integrity – how well it protects its programs and data from unauthorized access and modification.
    • Non-repudiation – how well it can prove that actions or events have taken place, so that they cannot be repudiated later.
    • Accountability – how well it can trace various operations to the entity, such as a user, responsible for the actions.
    • Authenticity – how well it can prove the identity of a subject, such as a user, to be correct.
  • Maintainability – represents how efficiently and safely the software can be modified, adjusted, and developed further. 
    • Modularity – how much of it constitutes separate modules that can be altered without impact on the rest of the system.
    • Reusability – how easy it is to reuse its assets in other systems or utilize in creating other assets.
    • Analysability – how efficiently it can be analyzed to assess the impact of intended modifications or to diagnose deficiencies.
    • Modifiability – how extensively it can be modified effectively without degrading the product.
    • Testability – how easy it is to establish test criteria and perform proper tests to verify if they are met.
  • Portability – represents how efficiently and effectively the software can be ported to other SW or HW environments.
    • Adaptability – how effectively and efficiently it can be adapted to new environments.
    • Installability – how efficiently it can be installed and uninstalled in a given environment.
    • Replaceability – how well it can replace a different software product with the same purpose and in the same environment.

How to measure quality – metrics

To assess the quality of a given piece of software, the project needs to include proper numerical metrics; in this case, the ones described by the standard. But almost every parameter of a project can be regarded as a metric and used for monitoring development. Does it mean we should use all of them to have the highest level of analytics? Of course not; in this case, more does not mean better. It is crucial to keep a rational balance in both the number and type of the introduced metrics, for example by utilizing the SMART method of metric selection, which states that a good metric is:

  • specific – relates directly to the product quality characteristic,
  • measurable – allows the product quality characteristic to be described in numbers,
  • attainable – assumes values possible to achieve in the assumed time,
  • relevant – important for the project or the organization, from the short, as well as the long-term perspective,
  • time-bound – with rational time constraints.

What metrics to choose 

Now that we know what metrics are, all that is left is to pick some of them. Easier said than done. There are many types of metrics describing various aspects of software systems, so choosing the right set for a project is not an easy task. We can measure code test coverage, count the lines of code (CLOC), code errors per module, function resolutions, function executions, functions triggered within a function; we can even measure the rate of comments per file.
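
As an illustration of how cheap some of these measurements can be, the sketch below counts lines and a rough comment rate per file. It is only an example: it assumes C sources under a src directory and treats any line starting with // or /* as a comment, which a real tool would of course do more carefully:

# Rough lines-of-code and comment-rate per file (illustrative only)
Get-ChildItem .\src -Recurse -Include *.c, *.h | ForEach-Object {
    $lines    = Get-Content $_.FullName
    $comments = @($lines | Where-Object { $_.Trim() -match '^(//|/\*)' }).Count
    [pscustomobject]@{
        File        = $_.Name
        Lines       = $lines.Count
        CommentRate = $( if ($lines.Count) { [math]::Round($comments / $lines.Count, 2) } else { 0 } )
    }
}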

The right approach requires balancing between how extensive you want your monitoring to be and how much effort you are ready to make performing said monitoring. What is more, in some cases the assumed numbers for a metric need to take into account the complexity of the solution that needs to be implemented to achieve the given level of quality.

Here are a few examples of metrics representative of certain quality characteristics:

  • Reliability
    • the number and severity of code errors
    • MTBF
    • MTTR
  • Maintainability
    • static code analysis warnings regarding cohesion, structure, and complexity
    • Halstead complexity
    • McCabe’s cyclomatic complexity
  • Portability
    • compiler warnings (the highest warning level setting)
    • static code analysis warnings regarding coding standards
  • Reusability
    • static code analysis warnings regarding the lack of cohesion of methods – LCOM


It is also worth mentioning that one metric may to some extent contribute to multiple quality characteristics.

How to introduce software quality characteristics to the project? 

Code quality is arguably one of the most important areas defining software quality. Therefore, it should be one of our first considerations when going from theory to practice. Here, some metrics are free and easy to use – just turn on the right options in the compiler and prepare the process of their repeated monitoring. Other metrics require implementing additional software, such as CppCheck for static code analysis and revision, and the right configuration to make the monitoring process possible. Both types are well worth using, especially if they can provide more information without additional effort – having static code analysis at our disposal we can monitor the quality of code syntax, but at the same time utilize a coding standard, such as MISRA, as an additional benefit.
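
For a C project the entry point can be as simple as the two commands below. They are only an example and assume GCC and Cppcheck are installed and the sources live in a src directory; recent Cppcheck versions can additionally check MISRA rules with the --addon=misra option:

# Let the compiler report as much as it can
gcc -Wall -Wextra -Wconversion -c src/main.c
# Run static analysis over the whole source tree
cppcheck --enable=warning,style,performance src/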

The next step would be introducing unit tests and the process for both functional and non-functional software testing, preferably, with the right division into proper levels. Of course, this stage is highly dependent on the size and quality requirements of the project, and everything beyond, even more so. 

Software Quality…

To sum up, a well-adjusted set of techniques, tests, and analyses may provide you with metrics, describing the chosen quality characteristics and sub-characteristics of the system in numbers. Consequently, you gain the ability to monitor the quality of the software under development. But we need to remember that software quality is not just good code with few errors, it is not just a well-drawn graphical interface, nor is it a title we get once our product achieves certain goals. It is constant monitoring and analysis of current parameters and trends throughout the whole development process.

Therefore, introducing metrics does mean additional costs, especially since highlighting the importance of the measured areas makes the team more prone to prioritizing them over the others, often against better judgment. Metrics should be introduced only after an in-depth analysis reaching outside of the development team, one that aims at aligning the business needs of the product with the maturity and comfort of the development process itself.

Software quality will also never be independent of the people creating the software; no metric will ever substitute, nor should it, for the developers’ involvement and expertise. A good team is half the battle.

Would you like us to check your software quality? Are you looking for a partner with a well-organized team ready to build embedded software for literally any industry you might need? Contact Us!


Everything is possible, you just have to want it – an interview with Klaudia Chmara, Delivery Manager

Written by: Justyna Cichocka (Recruitment and employer branding specialist)

Why Solwit?

Even before I decided to change jobs, I had the opportunity to work with Solwit in a joint project that I was running on the other end 😉 Mostly, because of the good memories I had after that collaboration, Solwit became a natural direction to start my search.

When I saw the ad, I felt like it had been posted especially for me 😊. The combination of liaison with the client, project management and working with a team is the perfect mix for me. Despite my technical background, I never imagined myself working as a programmer or designer, but my curiosity about new technologies and tools has remained.

On top of that, there were recommendations from my colleagues who were already working there at the time, and the company’s size – big enough to offer stability and a well-organized environment, but at the same time small enough that you don’t feel like just another number in a huge corporation.

What is the key feature that makes Solwit stand out?

The first thing I noticed was the openness to new people and ideas. This openness manifests itself on many fronts, starting with a flexible approach to the needs of the client, through the partner-like relationship between the Management Board and the employees, and ending with many ideas on how to integrate the entire Solwit team and make our lives and work easier.

For me, this is especially important because in my role as a manager I often introduce new methods or tools that not everyone may be convinced of right away. My biggest fear was joining teams that had been working together for a long time – I was afraid how my ideas would be received. I felt relieved when it turned out that everyone approached both me and them with great professionalism and enthusiasm.

What is the most difficult part of your daily work?

The first thought that springs to my mind: using two computers at the same time, especially in a home environment 😋 And in all seriousness, the biggest challenge is to be able to combine three worlds: client satisfaction, project team contentment, and making a profit.

Don’t get me wrong, these three things are not always in conflict with each other! But unfortunately, they require a lot of effort and care to keep that fragile balance intact.

What surprised you in Solwit?

Despite the fact that we have almost 400 people on board, there is still a friendly atmosphere among us. People know each other, the management is open towards employees and everyone is willing to help each other – also in between projects – if needed. I was probably most surprised when I entered the IT room and, before I could even introduce myself, I heard “Klaudia, right?”. These are maybe small things, but it’s nice that a person is still treated as a human being and not another badge number and a problem they bring to the table.

What are you most proud of?

I think it’s the fact that I’m not afraid of challenges and I can draw ideas from every situation going forward. I like challenges, I like to feel that what I do professionally goes beyond my comfort zone and requires me to expand my knowledge base or expertise. The most rewarding is when I hear from clients or people I work with that my efforts had a real impact on the success of the project.

If you could change your position for a week, what would you like to do?

If we are talking about any position in the world, just for a week, I would definitely choose something related to animals – either a zoo keeper or a dog shelter caregiver.

If we’re talking about a more realistic scenario, I’d like to test myself at conducting training sessions. My future career is of course connected with the managerial path, but also with training others in management, communication and related methodologies. Although I am still learning and there is still a lot of work ahead of me, I am very happy that Solwit has an internal training system, where I can learn from my colleagues and maybe one day I will be able to pass on my knowledge and experience. 

Who is Solwit a perfect choice for? What kind of people feel good with us?

People who are open to new projects, challenges and other individuals. Those who want to feel part of a team. In fact, this is the place for everyone who wants to work, develop and is not afraid of planes flying overhead while working at the office 😀

The beginnings in Solwit, what do you remember, perhaps you would like to tell us a story?

At my very first visit to the office, when I signed the contract, my attention was drawn by the pleasant atmosphere I encountered as soon as I passed the door. Dorota and Mariola greeted me with a big smile, sweets in the kitchen and a welcome kit. I was one of the first to receive a beautiful Solwit Kubota, so to this day there are people who envy me 😀 And as you know, the first impression can stay in your memory for a long time 😊

Join #solwitteam and be a part of the future! Apply now!


How we accelerated the withdrawals from STS Pay 3 times – architecture of the solution

Written by: Marketing Team

In April Google Cloud opened its new region in Poland. Solwit had the pleasure of presenting the results of its cooperation with two clients using Google solutions. One of them was STS Gaming Group. Its CTO, Wojciech Sznapka, talked to Michał Zieniewicz, Cloud Architect at Solwit. They talked about the details of implementation and integration of Google Cloud technologies, used to optimize and boost business processes in the company and benefit its clients. 

Do you prefer to watch the video? You will find the link at the end of the article. Interested in the full case study? Read it now!


Michał Zieniewicz, Solwit: Welcome everybody to our session, where we will tell you how to use Google services to build a competitive advantage for your business. My name is Michał Zieniewicz, I am Cloud Architect at Solwit. Here with me today is Wojtek Sznapka from STS, a popular sports betting company loved by all the winners. Hi Wojtek. 

Wojciech Sznapka, STS: Hi Michał. 

Michał, Solwit: I don’t know if you’ve heard, but Solwit is celebrating its 10th birthday this year.

Wojciech, STS: Oh, congratulations! 

Michał: Thanks! We have worked with Cloud since the beginning. For 10 years already some of us have walked in the clouds so that others could stand firm on the ground, developing and testing software for various business areas, such as artificial intelligence, image recognition, data warehouses and analytics, and integration and optimization of cloud environments. We have about 350 engineers on board, both testers and developers, many of them Google-certified “meteorologists”. I mean cloud specialists. Any celebrations at your company? 

Wojciech, STS: Not at the moment, but next year we are going to be 25 years old, so a huge party will be in order if there are no restrictions. 

Michał, Solwit: Could you tell us a few words on STS? 

Wojciech, STS: Sure. STS is the biggest sports betting company in Poland. We have nearly 50% share of the market. We have operated for 24 years, and for eight years we have worked online. We offer live and pre-match bets; we cover e-sports, virtual sports, and so-called bet games, betting on card game results. That is what we do in Poland, but for two years already we have been expanding abroad. Our STS Bet brand is present in Great Britain, under the UKGC license, and in the European countries allowing the MGA license. That’s STS in short. 

Michał, Solwit: Great. Now we know who we are, so you can tell us where the idea of the Payments API came from. 

Wojciech, STS: Sports betting is a fairly straightforward business. Participants put in some money, they bet, and either lose or win. They can pay the money out, or continue to operate with it. As you can see the financial transactions are a big part of it and they require a lot of control and software. Our technical department has been developing a system for payout automation, the STS Pay, for several years already. 

Before we started to work with you and Google the whole process of transferring the winnings to our customers’ accounts was done manually. After the packages were created, the transfers were authorized automatically by the SSIO system, or the package had to be checked by the staff. Later it was necessary to collect such a transfer package and “drop” it at the right bank. It took a lot of time and manual work. So we decided to use the cloud and integrate directly with banks using one API. 

Michał, Solwit: Could you share some details regarding the architecture? What did Payments API have to integrate with?

Wojciech, STS: Payments API integrates with 8 banks in Poland and provides a REST API for our STS Pay application. It is hosted by App Engine. We use App Engine Standard and App Engine Flex because the entire communication requires quite complicated cryptographic algorithms, which are not available, let’s say, out of the box. So, Flex proved necessary in this case. We also include Compute Engine in our stack and it requires a static IP. A very important thing is the integration with qualified signature card readers because every package needs to be authorized and signed cryptographically. 

Michał, Solwit: Is that why you decided to use the cloud solution? 

Wojciech, STS: There were a few factors. One was scaling. STS and the sports betting industry, in general, operate in a very uneven regime. During the Champions League, or when the national team is playing, or other interesting events are taking place, the traffic is quite big. In other periods it can be rather small. In this case, scaling was provided by App Engine at no additional cost. Security was also important, and Google offers many different levels of security. And of course, we don’t need to invest in infrastructure to start using the system, which we do not have to maintain. Everything is done for us. 

Michał, Solwit: Great. You have talked about the architecture and changes. I might add, the first concept for this solution assumed full automation and scalability of the package transfer, but it couldn’t be done due to legal regulations. The law requires that every package be accepted by “a protein-based interface” – a human. This forced us to make some changes to the architecture and the process alike. As a result, the process gained operators who accept the packages, an application integrating with the qualified signature readers, and other tools used by banks to authorize transfers. Static IP was another requirement of some banks – when connecting to the bank, only this specific IP is authorized. Compute Engine solved this problem. 

Interestingly, banks use signatures or certificates to connect to their APIs, which is totally fine and typical. But there was a problem with some banks, because they didn’t allow more than one such certificate, which made testing much more difficult. Some banks don’t make a test API available for integration, so testing needs to be done in the production banking environment. We solved this with test accounts, which allowed us to perform such operations. It is also interesting that some banks charge for every use of their API, so our bold assumption to check packages every couple of seconds generated quite a cost due to the intensive traffic. We realized that only at the end of implementation, after the first deployments, and solving it was very simple – we changed the configuration not to check the status so often. It is still fine business-wise, and the costs of API usage have been reduced significantly. 

 Tell us, how did Payments API influence your business? 

Wojciech, STS: I can’t say a bad thing about it. And that’s how it was supposed to be. First of all, the time required to finalize payouts has been reduced from 20 minutes, which wasn’t a bad result anyway, as it was the fastest system in the market, to nearly 7 minutes. And that is from sending a transfer request to the moment the money appears on the bank account. 

Michał, Solwit: So, you are the only company in Poland that transfers the winnings so quickly. 

Wojciech, STS: Yes, that is very important for our clients’ satisfaction. Imagine that you are in a pub with your friends, and you bet, the match ends, you win. Now you can simply send the payout request and in a matter of minutes you can use your money. 

It was our main motivation behind this solution. Secondly, we have simplified the payout procedure a lot. Before, people working in the finance department had been responsible for obtaining the packages, logging in to the proper bank, loading the packages, and signing them at the bank. With 8 banks there was a lot of logging in and out. The new solution allows the packages to be sent faster because it happens automatically. We have a unified process of signing the packages with qualified signatures, which makes work easier and allows us to allocate the resources in different places. The most important thing is scalability. Many huge events are coming. There were the Champions League finals in November, now we’re waiting for UEFA Euro, which is always a busy period for every betting company, and especially for us. Google’s scaling simply works to our advantage. 

When it comes to API costs, we are now in much more control, because in Google Cloud we can automatically filter Stackdriver logs and load them into BigQuery for further analysis. BigQuery is our main data warehouse in STS, so everything comes together nicely into a logical whole. It is very useful. The person responsible for payouts can simply enter BigQuery, see how many queries we generate, whether we are nearing the limit, where the limits are, or if any payments are due. It helps a lot. 

Michał, Solwit: So, the fact that Google integrates these services with each other has given you ready-made functionality nobody had even thought about before?  

Wojciech, STS: Exactly. It was surprising because the whole configuration took about 30 minutes, where manual log parsing, loading to the database, and querying would take several hours of development. Integration and all Google elements working together were of huge help to us. 

Michał, Solwit: It shows that using cloud technologies, especially Google Cloud, gives you more possibilities than you initially think. That is awesome, an awesome case. 

Wojciech, STS: Exactly. 

Michał, Solwit: Thank you all for your attention. I hope this presentation will inspire some of you to use Google services to reach your own business goals. If you have any questions or comments, just get in touch with us. And thank you Wojtek for your time, see you later.   

Wojciech, STS: Thanks!

Do you want to boost your business using cloud, just like STS? Drop us a line!


What does programming in a safety-critical project teach? The developer’s point of view

Written by: Łukasz Sojka (Designer – Programmer)
What have I learnt by developing a safety-critical project? As a developer with over five years of experience I have learnt a lot and will try to answer this question for you. We wrote about safety-critical software some time ago and gave you some pointers on how to do it. You can find the article here.

The process of developing safety-critical software

In short, safety-critical software is subject to several standards describing how it should be developed. The entire production process must be carefully planned and documented, from the general (the basic system requirements) down to the specific (the exact design of the implementation). For each stage of project documentation, a parallel verification plan must be created.

The software implementation stage is treated here as one of many elements of the process. Of course, standards play an important role at this stage as well. Code writing methods, rules, standards are imposed – e.g. MISRA C.

Project documentation

So, what does the developer’s work look like in such a project? Do all these documents and rules help or disturb the developer? Let’s start with the project documentation. It is beyond question that it should always be used in software development. However, in safety-critical projects there is often much more documentation, and it is usually more detailed. In addition to the basic description of the division into blocks, classes, writing out interfaces or data structures, we need to deal with implementation details. The software implementation plan of safety-critical software needs to describe its behavior under all circumstances, even the unlikely ones. There is no room for understatement or creativity here.

At first glance, this may seem like a huge limitation but it simply is not. Indeed, sometimes the rigid form of implementation can be problematic. In some situations, it would be much easier for a developer to pass an argument to a given function in a different way, but this would require changes in the software implementation design, and perhaps even the architecture. Altering these documents means changing the plans for their verification and most likely impacts the test environment.

However, these are very rare situations, which occur even less if the earlier stages of the project are well thought out. If they occur, they force the developer to take a broader look at the impact of possible changes on the entire project.   

And what about the benefits of such detailed project documentation? The main benefit is the very process of its creation: the entire team of designers, software architects, and often also developers and testers have to carefully consider every detail and every situation that may happen, and they carefully plan out all the components and data exchange mechanisms.  

Implementation and detection of errors

So, when it comes to implementation, the lack of room for the developer’s invention is not a limitation. It is an assurance that what the developer creates will work as expected, and the risk of errors is as low as possible from the very beginning.  

Personally, it gives me great satisfaction when the software I write works correctly right from the first launch and I do not have to fix dozens of bugs resulting from misunderstandings and missing guidelines.  

Of course, it doesn’t mean bugs never happen in such software. As they say, who makes no mistakes never makes anything. So, despite the preparation of even the most detailed project documentation, in some situations our code doesn’t behave as expected. It is important to detect such cases as soon as possible.    

Therefore, in safety-critical projects, every development stage is accompanied by a software testing stage. Starting from unit tests that check individual functions, through the component tests, up to the functional tests of the entire product. Again, from the developer’s point of view, it might seem that such meticulous software testing is redundant and only creates unnecessary work, because “I have run the program myself, and it works for me, and these testers come up with some unlikely cases that will never happen.” Again, it couldn’t be further from the truth.

Software testing

Working in a safety-critical project has often shown me how necessary software tests are and how complex the errors they sometimes find can be. The manual program launch, which is often the only test in many non-safety projects, does not check the time dependencies. It does not show whether, for example, some initial state becomes unstable under specific conditions.  

However, all of that is very clearly shown by well-planned tests that can enforce unlikely, but not impossible, conditions. The correction of errors detected via such tests is not bothersome for the developer, because a well-described test describes in detail the conditions for reproducing each bug, often making the process significantly faster and easier.

Error detection

But how come these errors appear if we have such a precisely described design? Apart from the possibility of implementation being inconsistent with the original assumptions, the wrong way of writing the code is one of the major causes of defects.  

Let’s talk about the software implementation itself. As I have already mentioned, the standards impose considerable requirements on safety-critical projects. Compliance with the rules of the MISRA C standard, the use of static analysis tools, ensuring code readability, and conducting code review are just a few examples. Once again, it might seem unnecessary and detrimental to the implementation deadlines. Especially, the MISRA rules may seem incomprehensible and useless.  

For example, consider the strict requirement to cast variables of different types explicitly before an arithmetic operation. After all, everyone knows the compiler can handle it on its own and select the appropriate type for the variable. But what if during such an implicit cast we lose important information due to a rounding error?  

Applying such a rule forces developers to consider whether their results are going to be correct and helps them understand how the code they write is going to be interpreted by the compiler.  

Code readability and review

Ensuring code readability and conducting reviews are also extremely important. Sure, a program written as one continuous block with variables like ‘a’, ‘b’, ‘c’ will probably work. But it will often cause huge problems when modified, not only by other team members but even by the original author. We can avoid these problems by following clearly defined rules that describe how to divide code into sub-functions, how to name variables, etc. Even when it seems that our code is easy to read and understand, verification by another team member during the review often shows that there is still room for improvement.  

Summary

So, what has developing software in a safety-critical project taught me? Thanks to using the MISRA rules and analyzing my own mistakes, I have certainly improved my programming skills. I have learned to look more broadly at the goals I want to achieve and the predicted results of my work. I also consider the impact my work has on the whole project. I have also found out that the standards are not as scary as they may seem, and that meticulously planned processes, documentation, and detailed tests should all be integral parts of every project. And I mean all projects, not only the safety-critical ones. It all makes development a really pleasant experience, and not at all boring and overly formal, as some might tell you.


Remote Recruitment: tips & tricks

Written by: Justyna Cichocka (Employer Branding Specialist)

How does remote recruitment differ from face-to-face recruiting? All of us could probably list a number of aspects, but the most important one is the human aspect. Body language plays a lesser role and it is easier to hide some imperfections or stress, but the common factor of remote and face-to-face recruitment is that during both of them we have to present ourselves to the best of our abilities. How to do it? It is not as difficult as it may seem.

1. Secure the area and identify possible threats

It is not at all coincidental that I use military-related vocabulary. Conducting a recruitment interview from home can sometimes feel like a battle with the world around us, so it makes sense to anticipate certain moves. Make sure the kids and pets are taken care of, prepare a neutral area (keep in mind what is behind you!), dress in something other than your PJs, and voilà, the first stage is done! All that’s left to do is charge your laptop, check your camera, mute your phone and find your headset.

2. Prepare something to keep your hands busy

I suggest a glass of water – useful and neutral at the same time. If you suddenly feel the urge to look away, sway in your chair, or if the stress simply takes over, it will be your life-saver.

3. Remember that on the other side there are humans too

This should really be the first and most important point of this list. We assume that during the interview it is the candidate who is questioned and the one who is supposed to make the best impression. Wrong! Both sides care just the same – the candidate in order to get the job, and the recruiter to get a new employee. And yes, we get stressed too. We have better and worse days, more or less favourable conditions, and we also want to make the best impression on you – in a professional and friendly manner. Even though we don’t see each other in person, shake hands or show you around the office, this conversation is just as important.

4. Prepare yourself

Some people like spontaneity. I respect that a lot, but a recruitment interview is a pretty unfortunate time to practice it. You have the right to expect a recruiter to know who they are meeting with, to have studied your resume, and to be familiar with your expectations – it’s a two-way street, and a recruiter wants to know a few things as well. These are not very demanding points, but let’s mention a few: what position you applied for, what made you apply, whether you had heard about Solwit before, and what you read or found out about us. If you want to know what the process looks like internally, take a look here – you will find more specific information on our recruitment path. It is worth recalling the content of the ad you applied for – there are often hidden hints as to what the subject of the technical part of the meeting will be. You will get some extra points for familiarizing yourself with our website or social media profiles. Prepare questions for the manager or recruiter – it is important that after the meeting you have a full overview of the situation, in case we call you with an offer of cooperation 😉

5. Think about what you want

By that I mean the position and salary you expect. If you know that you don’t want to develop in a certain direction – say so. If you’re not into frontend, but in your previous job you had to do it – let us know. The point is that after the meeting we should know what we can offer you – without harming you, your health or our time. When it comes to salary, think about it beforehand. Calculate the absolute minimum you need – we all have commitments and bills to pay. Talk about it openly. Also, think about a potential form of agreement – programmers and testers often consider B2B contracts. Read about it, or ask your friends if you haven’t had a chance to work on such terms.

6. Be yourself!

It’s a pretty worn-out phrase, but there’s no better way to say what I mean. We want to get to know you – if you don’t know something, tell us; if you want to ask something, ask us; and if you’re stressed and the stress is getting the better of you – let us know. You don’t have to know everything, and if there is something you don’t know, that’s OK too.

Off the record

This is a bit off topic, but also very important: we always provide feedback, but see point 3. If you don’t hear from us within two weeks, let us know – sometimes things may slip our minds during the hectic day-to-day activities.

So now you know how the recruitment process works. Maybe it is time to give it a try? You can find the latest job offers here.


Shelf-Inspection AI: effective product exposition

Written by: Solwit’s Marketing Team

As studies show, the way goods are placed on store shelves impacts the effectiveness of their sales and improves the shopping experience. Sales managers are often required to follow the product display guidelines of certain brands.

Shelf Inspection AI is a system that automates the real-time planogram verification process and makes it more efficient. It also offers various additional benefits and, as a result, minimizes the risk of financial losses resulting from incorrect exposition. Read on and find out if this solution works for your business!

The importance of merchandising

In-store merchandising is a set of activities intended to influence consumer behavior by optimizing item exposition in accordance with the prepared planograms.

Why is the planogram so important? Firstly, following its guidelines we avoid upsetting the suppliers. Secondly, proper exposure translates into better sales results. 

In order to achieve high merchandising effectiveness, we need to use displays, place promotional products in key locations on the shelves, expose the right quantities of products, and include additional marketing materials.

Solution

One of the most important causes of low sales in brick-and-mortar stores is ineffective merchandising, often inconsistent with the guidelines – so there is real room to improve these indicators. At Solwit, we have created Shelf Inspection AI – retail software that allows you to verify the exposition of all product categories placed on the shelves.

How does it work?

The employee responsible for merchandising at the outlet receives a request from the headquarters together with a planogram. They take a photo of the shelf using the app. The system uses artificial intelligence (AI) algorithms to analyze the content of the photo and immediately reports whether the products have been placed correctly. The AI recognizes the type and number of products, as well as other elements in the photo, and compares them with the planogram with high effectiveness.

The application allows you to: 

  • verify the arrangement of products in accordance with the planogram, 
  • monitor multiple goods on shelves and check their order, 
  • control proper labelling of promotional campaigns, 
  • generate extensive and transparent real-time reports, which can be used internally and for settlements with partners.

Cloud solutions allow for real-time merchandising analysis. In addition, they let you share the data with product suppliers on an ongoing basis. You can also archive reports and photos of the exposition.

Benefits of implementing Shelf Inspection AI

Supporting merchandising with the Shelf Inspection AI app brings tangible benefits for the entire company.  

The software provides: 

  • 99.5% of products in a photo identified correctly,
  • an exposition analysis process that takes less than 60 seconds,
  • immediate exposition verification, 
  • report preparation, 
  • lower costs of merchandising analysis, 
  • high AI efficiency, 
  • flexibility of a system that learns new products.

Stages of implementing the application in the company

1. Profitability analysis

At the very beginning, our experts conduct workshops and analyze your current exposition verification process. During such a meeting, they also indicate areas for improvement.

2. Solution proposition

In the next stage, we propose a solution tailored to your retail chain. We show you the benefits that you will achieve by implementing the automated product exposition verification system. 

3. Proof of Concept

The next step is to train the neural network model on your product exposition. This stage is adapted to the verification requirements of a specific planogram. After developing the model, we carry out tests to check the system’s effectiveness.

4. Implementation and support

At Solwit, we care not only about effective implementation of the solution in your company, but also about ensuring that it maximizes business benefits. Our experts support you in the process of software implementation, and then help you utilize its maximum potential to automate processes in your commercial network.

Examples of Shelf Inspection AI usage

  • Tracking marketing campaigns – professional merchandising, attention to exposition and real-time reports allow you to meet contractual exposition obligations. 
  • Price control – you will make sure that the store has the current prices and price differentiators of your products in place. 
  • Planogram compliance verification – with Shelf Inspection AI you can check: the quantity and sequence of products, the number of visible “faces” of the product, compliance with the planogram, product categories, price and price differentiators.
  • Shelf share analysis – you can easily check the share of your items in the entire exposition and optimize merchandising in the retail network.

In order to improve sales results in your retail chain, optimize your merchandising processes now.

Automating the process of exposition verification in retail stores improves and accelerates the control process. Most importantly, it leads to an increase in sales rates through proper presentation of articles on store shelves. Proper presentation means compliance with the brand manager’s requirements in terms of the number of products, their arrangement, and the proper exposition and labelling of promotions.

You don’t have to wait long for the results of using the Shelf Inspection AI app. It has a positive impact on both the sales volume of a given store and the business relationships with partners whose brands are sold there. Do not hesitate to contact us. You will receive a solution tailored to your needs that will not only protect you from losses resulting from improper product exposition, but also increase your sales results.

If you want to learn more about Shelf Inspection AI or book a consultation with our experts, click here – Shelf Inspection AI – Solwit 


In terms of numbers, the company is already a corporation, yet the kitchen still has a family feel – interview with Mikołaj Andrzejewski, Embedded Developer

Written by: Justyna Cichocka (Recruitment and employer branding specialist)

Why Solwit?

When I was changing jobs, I was looking for a place where I could primarily pursue my passion – programming. Previous experiences had taught me that the larger the corporation, the more time is spent in meetings, which in turn builds a sense of wasted time. I wanted to have a real impact on the product and its development – this seems to be the domain of smaller companies. I consider Solwit to be a golden mean – in my project there are not many meetings and the bar is set high, but on the other hand there is no need to constantly work late to make up for underestimated tasks. At Solwit we put emphasis on development, organizing internal trainings and financing external ones. Why “us”? Because I had the pleasure of leading one such internal training, so I feel part of it 😉

What is the key feature that makes Solwit stand out?

I think the main feature is the friendly and open environment that we have managed to build and maintain, despite the quite large number of co-workers. In terms of numbers, the company is already a corporation, yet you can still feel the family atmosphere in the kitchen.

What is the most difficult part of your daily work?

Limiting the number of coffees drunk 😊 Unfortunately (or fortunately) there are always many interesting topics to talk about with equally interesting people, it’s just a pity that time in Solwit passes so quickly and the job will not do itself…

What surprised you in Solwit?

The first surprise – the efficiency of the IT team. A hardware failure does not have to mean putting work on hold for a few days – most issues can be dealt with immediately! The second thing is the performance of the recruitment team. I have fond recollections of starting my cooperation – after a few successful interviews I was really looking forward to coming to the office 😊

What are you most proud of?

If I had started a family, that would be my greatest reason to be proud 😊 Unfortunately I haven’t gotten that far in life, so at this point my biggest pride is the fact that everything I’ve achieved in IT I owe mostly to myself. During my school days, I spent night after night trying to write a working program and a lot of time in my little electronics lab, and despite various problems at school, at this point I have nothing to be ashamed of. And I hope that my mother will forgive me all the ‘F’ grades I got at school 😊

If you could change your position for a week, what would you like to do?

If it’s only on paper – I’d love to take over as CEO, and it could even be for longer than a week 😉 If, on the other hand, we’re talking about responsibilities – I don’t think I’d want to change anything. I like my job – sometimes it’s even silly to call it that – designing and programming are still my passion.

Who is Solwit a perfect choice for? What kind of people feel good with us?

I think there are several types of people:

a) for beginners – this is definitely a good place to grow,

b) for those who are bored with work in big corporations – here you can rest your headphones and vocal cords,

c) for people with ambitions – training programs and substantive support of senior colleagues give a huge kick and motivate to explore the unknown.

Your beginnings at Solwit – what do you remember? Perhaps you would like to tell us a story?

There is this one story. On my euphoric commute to work, I took a wrong turn and caused a collision; the bumper of my car was damaged. I can still remember how, despite our short acquaintance (it was my second week at work), a teammate suggested we meet at the weekend and fix it. He invited me to his place and we spent several hours in the garage replacing the cracked part. The car is still running today, and over time I have become convinced that this is not an isolated case of selfless support at Solwit. By the way – greetings to you, Maciek 😊

Where do you see Solwit in 10 years?

Well, seeing the rate of growth, I would very much like to see Solwit in similar colours to today’s. It is true that there is a visible trend towards remote working, but I would still like to see the traditional team-building meetings – both quarterly and daily – maintained.

What can we wish to the entire Solwit team on its 10th birthday?

To stay on course and stick to the plan 😊
