Refactoring and Reengineering: Why Software Maintenance Is Important
Mar 30, 2021
Keeping up with the competition and making consumers satisfied with your digital product in 2021 requires maintaining its program component at the highest level of quality standards.
One of the ways of achieving this is by frequently revising and improving your code. In this regard, there are two main approaches to keeping the software tight – refactoring and reengineering.
Today, let’s take a closer look at these two practices, explore their differences and discover the main benefits updating your software has for the business.
As usual, let's start with the definitions first…
What is software refactoring and reengineering?
Essentially, both refactoring and reengineering aim at rewriting existing computer code to improve the internal design, structure, and/or implementation of the software, while preserving its external functionality.
Refactoring describes changing bits and pieces of software while keeping the core of a program mostly untouched. Meanwhile, reengineering implies making fundamental changes to a program’s structure and design – whether adapting the software to a new hardware platform, changing its programming language, or shifting it to a new dialect.
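To make the distinction concrete, here is a minimal refactoring sketch in Python (a hypothetical example, not from any particular codebase): the duplicated discount logic gets extracted into a named helper, yet the function's observable behavior stays exactly the same.

```python
# Before: the discount rule is buried inside one long function.
def total_before(prices, is_member):
    total = 0
    for p in prices:
        if is_member:
            total += p * 0.9   # members get 10% off
        else:
            total += p
    return round(total, 2)

# After: the discount rule is extracted and named. Same inputs,
# same outputs, but the intent is now explicit and testable.
def discounted(price, is_member):
    """Apply the 10% membership discount to a single price."""
    return price * 0.9 if is_member else price

def total_after(prices, is_member):
    return round(sum(discounted(p, is_member) for p in prices), 2)
```

Both versions return identical totals for identical inputs – preserving external behavior is precisely what separates refactoring from a functional change.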
Accordingly, there are three main types of software reengineering practices:
Porting – when a program is adjusted to operate on different hardware.
Translation – when the code is translated from an old (legacy) language to a new (modern) one.
Migration – when the code is shifted to a new dialect of a language without changing its intrinsic nature.
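As a tiny illustration of translation/migration, here is what moving a fragment of legacy Python 2 code to modern Python 3 might look like (a made-up snippet for demonstration purposes):

```python
# Legacy Python 2 (shown as a comment -- it no longer runs on Python 3):
#   def report(users):
#       for name, age in users.iteritems():
#           print "%s is %d years old" % (name, age)

# The same logic migrated to modern Python 3:
def report(users):
    lines = []
    for name, age in users.items():                 # .iteritems() was removed in Python 3
        lines.append(f"{name} is {age} years old")  # f-strings replace % formatting
    return lines
```

Note how the program's behavior is preserved while the obsolete constructs are swapped for their modern equivalents.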
In practice, the two approaches often overlap, as reengineering typically consists of a series of refactoring initiatives.
For a real-life analogy, compare it to fixing up an old car. One way is to only fix the parts that have worn out, so it remains virtually the same car inside and out, just with some fresh details. That would be refactoring. Or, you can strip the car down and replace its engine, suspension, transmission, and so on – or even convert it to run on a different fuel. That's reengineering.
You choose the best option depending on your setup, resources, and needs. Note, though, that just like with the car, reengineering will usually take a lot more time.
Now that we’ve got it clear about the nature of software redesign, let’s move on to its key advantages from a business perspective.
Benefits of software refactoring and reengineering
Dealing with the legacy code is perhaps the developers’ second least favorite activity (right after jumping off a bridge). It is also the most common cause of spilling coffee over the keyboard in rage.
So, why do it in the first place?
It turns out there are three rather convincing reasons for a business to consider rewriting its product's code. Let's look at each in turn:
Cost Efficiency
The first thing to think about when considering any business initiative is its monetary value. Although rebuilding software means immediate spending, the long-term return on investment from updating a digital product will, in most cases, cover it tenfold.
Just within the US, various software failures cost businesses a whopping $1.7T back in 2017, with the figure growing by over 60% in a year. That's definitely one statistic to consider if your company's budget is anywhere among your main concerns.
And even if there are no code-related issues affecting your business directly right now, a proactive approach will definitely save a good buck on resolving those issues in the future. It’s well known that it is a lot easier to prevent sickness than to cure it. So, redesigning a website or an app is definitely a forward-thinking decision to make.
Quality
IT technology does not stand still, and applying new software development methods or polishing the existing code will always have a positive effect on performance. So, quality is another major point to factor in when considering adjusting or rebuilding a digital product.
How old is your platform? Perhaps it’s worthwhile to shift the servers to the cloud for better flexibility or to scale up. Maybe it’s time to change a part of your tech stack for newer technology. Or quite possibly, it’s a good idea to rewrite the entire program in a different coding language.
Whatever it is, properly redesigning your software will improve how it works, looks, and feels, which the end-users will definitely enjoy and value big time.
Safety and Security
The older a piece of software gets, the more likely it is to have trouble operating within a modern environment. From integration and maintenance to updates – it surely won't hurt to keep up with the latest tech trends and ensure that nothing goes wrong when least expected.
In fact, refactoring bits and pieces of your code may help you avoid the need to reengineer the entire product down the road, which may be a lot more costly and time-consuming.
Gradually modifying software also reduces the risks of losing valuable business data or stalling the product, so you can update a website or an app without worries in a safe and secure way.
When to consider a product update
Now, all of the above sounds promising and great. But how does one tell if a digital product is in urgent need of an update or revision?
In this regard, there are a few red flags to consider, too.
The first and foremost hint suggesting you need to consider refactoring or reengineering comes after answering the following question:
Does the product you’re using fulfill all of the functions it was designed to and are the users fully satisfied with its functionality?
If everything is up and running and users are happy, then there is probably no need to bother with software redesign, at least no urgent need. You may still consider the long-term benefits of updating the software, and ask your IT team to scan the product for potential bottlenecks and issues. But bear in mind the “don’t fix what isn’t broken” philosophy and try to focus on the urgent and essential.
Is your app or website working quickly and smoothly enough?
If so, good news! But if it's not, you don't want to waste your users' time by keeping outdated products around and "beating a dead horse". Invest some time and resources into reengineering or refactoring the software's bottlenecks, and you'll get a lot more back once users notice the improvements.
If bugs and errors are popping up quicker than your team is able to fix them, it is definitely a sign to look at the code structure behind your digital product and change it.
In fact, nothing spoils the user experience and the perception of a business more than product bugs, so it's better not to ignore such issues, even minor ones.
Last but not least, you may want to consider rebuilding your software when shifting to new hardware or software environments.
As already mentioned, maybe it’s time to move to the cloud or shift your product to a new coding language altogether. Talk to your IT team and discuss the need to change the basics and potential benefits.
All in all, redesigning your app or website is a topic worth revisiting frequently, and one with a lot of potential.
Whether refactoring a part of the code or reengineering the entire product, it is worthwhile to think everything through, compare the pros and cons, and evaluate potential issues. If done properly and at the right time, updating your software will definitely be a change for the better for both you and your service consumers.
Want to learn more about software restructuring? Feel free to contact us and we’ll answer all your questions!
Digital Transformation In The Education Industry: E-Learning Revolution
Nov 6, 2020
Just like many other traditional industries, education has been shifting to a digital-first dimension for quite a while. Seemingly approaching its pinnacle since COVID, digital transformation in the education industry is only picking up speed right now, and we can watch an actual e-learning revolution happen right in front of our eyes.
How big is the progress so far, and what opportunities are there in the e-learning niche? Let's find out.
This year's contact restrictions left schools few options but to rethink the standard model of knowledge distribution and adapt to the challenges of the new world. Remote schooling has become the new norm on all levels – from early education and tutoring to higher education, professional growth courses, and non-academic training.
Naturally, such great demand requires a lot more than an outstanding supply of hardware. Hardware is the lesser problem, since most households and learning facilities in developed countries have access to a wide range of tech. The real challenge is to develop appropriate software tools that allow schools and students to carry out regular activities without sacrificing education quality.
The e-learning revolution
Speaking of the types of digital solutions helping students learn remotely – the list is virtually bottomless.
Right now we’re seeing major efforts at improving web classes, automating student and work assessment, and digitalizing course materials via smart textbooks, rich video content, slide show presentations, and much more. In this regard, Kognity and Lix Technologies are two startups leading the race at the moment.
Secondly, improving general education accessibility and support within the sector, especially for college and university students, is another major focus point right now, with Graduway and Teacherly delivering some great results already.
Various administrative tools for conducting the educational process online are also being developed. These include digitalized school payment and fee collection software, student attendance tracking, and education monitoring applications. And don't forget about language adaptation and designing new learning models for students with disabilities.
Meanwhile, the task of digitalization is considerably simpler for adult and post-degree learning, where online courses had been widely adopted well before the pandemic.
All this being the case, digital learning remains unsaturated product-wise, as the demand for niche solutions exceeds the current market supply.
Edtech investments and initiatives
Given the timing of COVID and further uncertainty about quarantines and travel restrictions, entrepreneurs see it as the perfect time to put both their time and resources into developing edtech businesses and startups.
Right now, the UK remains the largest hub when it comes to digital transformation in the education industry, receiving almost 40% of Europe’s investments in the sector.
Speaking of which…
While businesses are the main drivers of progress in edtech adoption, governments also play a part in the process. In this regard, Estonia's legislators set a good example by not only supporting the internal edtech market but also freely sharing the developed tools and technologies with the rest of the world.
Notably, a lot of digital education initiatives are taking place in developing countries, where the pandemic has hit already crisis-affected regions.
Accordingly, international humanitarian organizations and private foundations are backing versatile tech solutions in cooperation with ministries of education. Right now they are helping students and families in places like Sudan, Uganda, Lebanon, Jordan, Chad, Bangladesh, Kenya, and other countries access affordable education and continue learning.
The global education system is undergoing a major transformation on all levels. Both schools and universities, as well as non-academic educational institutions, are switching to a digital-first approach. This means an increased demand for niche software development and technology maintenance.
Online learning is becoming the new norm, and the trend does not seem to be reversing anytime soon. Therefore, investing in custom e-learning solutions is decisively a winning business move right now.
Digital Transformation In Real Estate: Trends And IT Solutions
Oct 7, 2020
Unlike many other industries, real estate is booming in 2020/2021 and so is digital transformation within the industry.
As the pandemic hit businesses of all types, real property seems to be the bullet-proof asset everyone looks for in turbulent times. The real estate market is on fire, but the competition is fierce, too, so poor acquisition choices or property mismanagement can be financially fatal.
Now, what can resolve such issues better than software?
Real estate software development and IT services
Technology has helped overcome challenges in almost every sphere of life, and real estate management is no exception. What's more, custom software has already proven to be a golden goose for the industry.
Here’s who can benefit from custom property management solutions:
Real estate agencies
Private and corporate facility owners
Landlords and tenants
Real estate startups
Now, let’s look into what we can offer to each of these groups.
1. Real estate agents and agencies
Before anyone else, it's real estate agencies and individual realtors who can benefit from custom digital products. Whether it's a single-page business-card site or a fully-fledged digital platform with CRMs and admin dashboards – building a strong online presence is the right move.
Here are some of the products and services businesses usually look for:
Web and mobile apps for realtors and agencies
Customer relationship management (CRM) and marketing tools
Facility search and listings
Multiple Listing Service (MLS) and IDX integrations
Accounting and document turnover
2. Private and corporate facility owners
Managing property is no easy task in either the private or the corporate sector. Especially when owning multiple homes or offices, apartment buildings, warehouse spaces, etc. – things get pretty messy without effective management.
For businesses with many locations (like food chains, hotels & resorts, fitness clubs, multi-office companies) – having the following may be a lifesaver:
Utility billing and energy management tools
Accounting and financial integration
Document turnover software
Customized reporting and data analytics
Maintenance work oversight
3. Landlords and tenants
Selling and buying property is just the tip of the real estate iceberg.
Another big chunk of the industry is renting and leasing, which is even more intense than trade.
Landlords and tenants know it's not easy to find a good place to stay, on the one hand, and a reliable person or business to entrust your property to, on the other.
Here’s how landlords and tenants can benefit from custom real estate software development:
Applicant screening and lease management tools
Accounting and document management
Online payment functionalities
Reliable communication channels
Maintenance work oversight
4. Real estate startups
Given the state of the real estate industry, it does not take a college degree to see that it's high time to occupy a niche in it. RE aggregators are gaining momentum but are usually limited to a particular city or area. Therefore, there are plenty of opportunities to leverage the current market rise and find your spot in the sun while helping others find theirs.
So, startups and individual entrepreneurs are another large group of clients looking for digital products in the real estate industry. Here are some of the popular options:
Custom web and mobile apps
Property management software
Accounting and financial integration
Real estate data analytics
Disruptive innovation in real estate
Apart from everything mentioned above, it is possible to develop or integrate an AI-powered solution predicting property price fluctuations and investment potential (similar to one of our fintech projects – OxfordRisk).
The algorithm evaluates key facility metrics like location, area, property type, and previously recorded sale prices. It then compares these against the market's big data, estimating the property's value over time.
Not only does it help to understand a property's actual price and investment potential, but it also helps predict the best moment for a deal.
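To give a rough idea of how such an estimator can work (this is a toy sketch with invented numbers, not the actual algorithm behind OxfordRisk or any production model), one simple approach is to average the prices of the most similar previously recorded sales:

```python
# Toy sketch: estimate a property's value from a few key metrics by
# averaging the prices of the most similar previously recorded sales.
# All numbers here are invented for illustration.

RECORDED_SALES = [
    # (area_m2, distance_to_center_km, sale_price_usd)
    (50, 2.0, 180_000),
    (80, 5.0, 210_000),
    (120, 1.0, 450_000),
    (60, 8.0, 120_000),
]

def estimate_price(area_m2, distance_km, k=2):
    """Average the k most similar recorded sales (a crude k-nearest-neighbors)."""
    def dissimilarity(sale):
        a, d, _ = sale
        # Scale each feature roughly so neither dominates the distance.
        return ((a - area_m2) / 100) ** 2 + ((d - distance_km) / 10) ** 2
    nearest = sorted(RECORDED_SALES, key=dissimilarity)[:k]
    return sum(price for _, _, price in nearest) / k
```

A production system would add many more features and market-level data, but the principle – compare a property's metrics against recorded deals – is the same.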
Real estate is booming worldwide.
Whether you own property, manage facilities, invest in real estate, or help others do the above-mentioned – there are great challenges and opportunities on the way.
Successfully meeting the first and seizing the second requires proactive action and leveraging the power of technology.
Want to learn more about digital opportunities in real estate right now? Contact us to ask any questions!
MintyMint’s Software Development Workflow: Best Practices
Sep 24, 2020
Every IT team has its own ways when it comes to producing software. And while some of the main dev tools and techniques are common across the industry, it’s the details that make all the difference.
So, it wouldn’t hurt to explore MintyMint’s software development workflow before you choose us as your outsourcing partner.
Business cooperation models
When it comes to the software development workflow, we must define the three main types of cooperation we offer:
a. Dedicated software team
Need a ready team of experts to work on a project under your supervision?
Hire a dedicated team!
In this case, you outline project goals and we provide a team of skilled devs working on them full-time. You determine your role in the development process – from direct team management (setting tasks and deadlines) to brief supervision of key milestones. This gives you the right amount of leverage over the dev process while taking the executive headaches away, without compromising end-product quality.
This option is a great fit for startups and SMBs with tech-savvy management and a clear product vision, but no tech team of their own.
The key benefits of hiring a dedicated team with MintyMint are the following:
Predictable and fixed budget;
Full-time access to dedicated experts focused on your product and business goals;
PM and QA engineer services free of charge.
b. Resource staffing services
Moving on to resource staffing (aka team extension). This type of partnership is great for scaleups and existing teams that need additional resources to fulfill a temporary task or take on a particular responsibility within the project.
Externalizing workloads is not just a good move budget-wise. It also takes pressure off your core team (so they can focus on high-priority tasks and challenges) and lifts the burdens of recruitment and employee management, which fall on us.
In short, we provide the IT experts and take care of HR management and paperwork. Meanwhile, you can focus on essential project tasks and processes.
Here are the main benefits of backing up your team with MintyMint experts:
Full control over team member management;
Always available specialists to fulfill urgent tasks;
Easy recruitment, HR, and paperwork.
c. Turnkey project development
Last but not least, you can always order a complete, turnkey project. For startups and entrepreneurs without extensive IT product building experience, as well as a wide range of SMBs – this is probably the best bet.
In general, it’s a go-to choice for anyone who doesn’t want to deal with process management but does need tangible results.
When ordering a turnkey project, you provide a general product vision and give continuous feedback throughout the production process. Meanwhile, our team delivers a fully tested and release-ready product within the agreed timeframe. This way, you get a full-cycle project development service where you only pay for results.
The main advantages of ordering a turnkey project from MintyMint are:
You pay for a ready product (fixed price);
We take care of production and associated risks;
Less management as compared to other cooperation models.
IT project development stages
As for the IT development workflow itself, our typical project consists of three main stages.
1. Discovery and Planning
This is the first stage in any project. Here we transform a client’s idea into a product concept with a clear design and measurable characteristics.
The discovery and planning stage is there to:
Set the main business goals and needs;
Investigate potential problems, risks, and solutions;
Determine the product’s key user value;
Develop the client image;
Outline the main product features and details;
Create a value chain map with modules, streams, and use cases.
All of this allows us to make a project estimate with a detailed roadmap that includes the scope and time of work, key milestones, and deadlines.
2. Development and Implementation
After a project’s roadmap is outlined, the actual “development” begins.
Our design team creates a UX/UI brand book with elaborate visual materials and nuanced user experiences based on market research and trends.
Meanwhile, our coding department implements the actual back-end and front-end functionalities of the product.
This stage allows us to:
Develop the product’s prototype;
Create a holistic brand image;
Develop the program functionalities of the product;
Create and support the database;
Conduct initial product testing.
3. Releasing and Maintenance
Once the product is developed and tested, it is ready for release and subsequent maintenance.
At this point, our team prepares technical documentation and transfers the product from the development environment to the public internet.
This stage includes:
Preparation for the release;
Publishing the product on public sites;
Subsequent product maintenance and monitoring.
Technically, this marks the end of the project's lifecycle. That said, maintenance is a continuous process, and our experts remain in touch to resolve any technical issues.
Software development team management
Our project manager is always there to help you at every stage of the production journey.
We know that any project or task has an individual set of requirements in terms of daily team collaboration, planning, and execution – which requires an appropriate management approach to ensure smooth software development.
At MintyMint, we have found the following management practices to be the best for us and our clients:
Waterfall is a linear management methodology. It implies a detailed project roadmap with consecutive work stages that follow one another in a strict order. Waterfall is best used for projects with:
Clear end-product vision;
Defined scope of work;
Continuous development process;
Strict project timeframe.
Scrum is an agile project management approach. It implies dividing the work into iterative stages called “sprints”, where the progress is constantly updated and reevaluated at online meetings.
Scrum is best used for:
Big and complex projects;
Increased client engagement;
Gradual development process;
Continuous progress review.
Kanban is a lean management model that revolves around balancing project priorities and team capacities. It offers visualized process oversight via the Kanban board and helps avoid bottlenecks while reducing both technical debt and WIP (work in progress).
Kanban is good for:
Flexible development process;
Custom software development process
Depending on the client’s needs and setup, it is possible to combine different project management practices for greater productivity and customer satisfaction.
Technology and expertise
As for the technology in the arsenal of our IT experts – there are really no boundaries. Our software specialists’ combined experience and skillsets are versatile enough to adjust to every client’s specific needs.
Some of the technologies we often work with include:
Now that you know every nuance of our team’s software development workflow, you can see whether it fits in with your current goals and needs.
Naturally, every project requires a unique approach. So, if you have any questions regarding how MintyMint can help you fulfill your IT goals – feel free to reach out!
The Rise of 5G: A New Network Standard
Sep 8, 2020
5G towers are rising on every corner of cities around the world like McDonald's restaurants in the 90s. And while the revolutionary network is shrouded in conspiracy theories on one hand and excitement on the other, the facts speak for themselves: 5G is no longer just an inevitable future, but rather a reality that has arrived at the doorstep.
What does this mean for the average consumer? How does the new technology standard affect the advancement of the Internet of Things? And most importantly, what is its role in global digitization? Let's find out.
First things first
Before anything else, what exactly is 5G?
5G stands for fifth-generation wireless cellular connection, which network companies began widely deploying as recently as last year. It is the successor of the well-known and widely applied 4G, which dominates the Internet consumer market today.
Technically speaking, 5G utilizes radio waves to transmit data – just like other wireless technologies. What makes it distinct from the competition is the higher frequency of the radio signal that 5G operates on (up to 2.7–39 GHz, against 0.7–2.7 GHz for 4G). Sounds simple: the higher the frequency, the higher the data bandwidth. That said, it's not quite so simple in reality.
Here’s how network speed has changed over time:
The physical consequence of using higher frequencies is the shortened wavelength that inevitably comes along. A shorter wavelength means a smaller distance the signal can travel without quality loss. When it comes to building an Internet network, this means 5G requires a far denser cellular geography to achieve even coverage. In simple terms, it requires installing many more cellular towers per area to effectively penetrate buildings and landscape irregularities and reach the consumer.
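The underlying physics is a one-line formula: wavelength = speed of light / frequency. A quick calculation using the band edges mentioned above shows how much shorter 5G millimeter waves really are:

```python
C = 299_792_458  # speed of light, m/s

def wavelength_m(frequency_hz):
    """Wavelength in meters for a given radio frequency."""
    return C / frequency_hz

lte_wave = wavelength_m(2.7e9)  # upper end of the 4G band (~2.7 GHz): ~11 cm
mmwave   = wavelength_m(39e9)   # upper end of the 5G band (~39 GHz): ~7.7 mm
```

At 39 GHz the wave is roughly 14 times shorter than at 2.7 GHz, which is why it attenuates faster and struggles with walls and terrain.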
When fully set up, 5G can support up to a million devices per square kilometer (as opposed to just 100,000 for 4G) and offers a whopping broadband speed measured in gigabits per second – similar to cable internet capabilities.
So, what exactly do we need 5G for? Does the prospect of watching 4K video online really account for all the hype around the new network standard?
Well, the main reason behind the introduction of 5G is indeed the increased download speed it offers. However, there are a few equally important things that come out of achieving faster web access.
First of all, apart from your evening movie-watching routine, 5G is highly beneficial to the development of the IoT. By increasing connection speed and coverage, as well as the number of supported devices per area, it surpasses the present limits on establishing a mesh of connected devices.
The Internet of Things is among the fastest-growing and most important layers of the new digital realm we are entering. So, it's quite hard to overestimate the importance of facilitating IoT with a radically better type of web access. By the way, you can read more about IoT and its role in creating the future in our previous article.
Secondly, and this is far less obvious, 5G has the potential to replace local internet service points (like cable internet and wi-fi) with a single, universally applicable technology. Yes, you heard that right. A universal wi-fi will be here before the universal basic income (who’s to judge what is best?).
Check out Ookla’s up-to-date map of 5G rollouts in cities around the world.
Indeed, why bother installing cable internet (which requires digging, boring, or drilling) and extending it via countless wi-fi routers when you can have wi-fi covering the entire globe?
It sounds perfect!
Yet, there are actually some concerns…
However great the technology is, it’s a lot like a troubled teen in a household.
Your neighbors don’t like it.
Ever since its introduction, 5G has been dogged by rumors, disbelief, and even fear and hate. It is (rather widely) believed that the increased density of radiofrequency electromagnetic fields (RF-EMF) caused by a web of 5G network towers may be harmful and poses a threat to the people and environment exposed to it.
In fact, hundreds of scientists from over 30 countries have signed an appeal to the European Union Commission, warning about the potential consequences of rolling out the new 5G cellular network. It should be noted that these claims are not entirely without evidence. A number of substantial studies by reputable health protection organizations have found a link between elevated RF radiation exposure and DNA damage.
This point of view, of course, sits at the opposite end of the spectrum of speculation surrounding 5G – as opposed to the patently absurd yet surprisingly widespread theory that cell towers cause COVID.
Whichever reasoning proved more convincing to the radical protesters against the new technology, the fact that a number of 5G cell towers have been knocked down and set on fire in the UK indicates that part of the public does not yet welcome 5G into their lives.
Costs are always an important part of the discussion when evaluating a project. When it comes to 5G, the price is not its strongest feature.
Current 4G carriers use 3–4x fewer towers than 5G needs, and setting up those additional network points is not free. Not only does the equipment cost a pretty penny, it also has to be located somewhere. This means that placing a 5G transmitter requires either building a new tower or attaching it to an existing tower or building – which brings land-use rights, permits, construction, and all kinds of associated costs.
Now, these costs are difficult to estimate precisely, since each carrier has its own ways. That said, independent experts estimate an average installation cost of up to $200k per microcell. Each of these transmitters has a reach of between 0.2 and 2 kilometers and can support up to 2,000 devices. Read more about the 5G network structure here. Now, compare these figures with the number of potential users and the areas of their distribution… Without delving into complex math, it's safe to say that setting up a next-gen broadband Internet system is quite expensive, even for the industry giants.
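To see how these figures add up, here is a crude back-of-the-envelope sketch (the $200k per microcell and the 0.2–2 km reach come from the estimates quoted above; the circular-cell tiling and the example city size are simplifying assumptions):

```python
import math

COST_PER_MICROCELL = 200_000   # USD, upper-bound estimate quoted above

def microcells_needed(area_km2, reach_km):
    """Crude estimate: tile the area with circular cells of the given reach."""
    cell_area = math.pi * reach_km ** 2
    return math.ceil(area_km2 / cell_area)

def rollout_cost(area_km2, reach_km):
    return microcells_needed(area_km2, reach_km) * COST_PER_MICROCELL

# Covering a mid-sized city of ~100 km2 with short-reach (0.5 km) cells
# takes roughly 128 microcells, i.e. on the order of $25M.
```

Even this optimistic model – no overlap, no terrain, no building penetration – lands in the tens of millions of dollars for a single mid-sized city.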
As for the average consumer, replacing one’s old smartphone with a new one that supports 5G, in addition to advanced carrier plans, is also a hefty investment to consider.
Here are T-Mobile's 5G plan options, to give you an idea:
… and immature.
Last but not least, it does seem like the revolutionary technology that’s been promised is just not ready yet. Yes, there are a lot of towers in major cities around the world. Yes, the new smartphones support the new technology. And yes – carriers are already offering us that lightning-fast 5G Internet “for just $69.99”.
All that being the case, the coverage is not great, and neither are the 5G smartphone prices or the actual Internet speeds. There are still connectivity issues in buildings and densely built areas, while providing 5G to sparsely populated areas is economically disastrous.
Unfortunately, this is the harsh reality 5G has to face.
Speaking of dreams and reality, another obstacle on 5G's way to a bright future may be posed by one of the most daring entrepreneurs in modern history – Elon Musk.
The tycoon's efforts to create his own universal data network, Starlink, form a heavy counterforce to the "mainstream" 5G. What sets Starlink apart from similar technologies is that it is meant to operate through a web of satellites in low Earth orbit, lower than most current space objects. In fact, the satellites fly so low you can see them in the night sky – just like stars. And since they are linked (or so it seems), the network is called Starlink.
Pretty creative, isn’t it?
What makes Starlink a serious contender to 5G is the history of its inventor's startling projects, which have been daring, revolutionary, and most importantly – successful every time. Giants of the payment, automotive, and space industries have already fallen prey to Musk's ambition. And given that 775 Starlink satellites already orbit the Earth, this is definitely something to watch – both for average consumers and key industry players.
So, is 5G overrated?
5G is clearly a promising technology that brings a lot of benefits to the table. However, the technical side of it is still underdeveloped. A great idea, definitely, but it has fallen victim to big business’s greedy advances and is simply overmarketed for the moment.
All in all, 5G still has a long way to go to live up to people’s expectations. And given the alternatives emerging in the global Internet services market, it is not quite clear where this path will end.
Microservice vs Monolithic: The Ultimate Software Architecture Guide
Jul 21, 2020
The global IT industry evolves rapidly in size, shape, and form, and so do the software development practices applied all along. Like any evolutionary process, this one strives to maintain efficiency while gaining capabilities. To keep the progress going, the traditional ways of doing things have to be replaced.
For IT development, this means there is a point along the journey of software enhancement where we cannot continue to add structures upon structures of ever-increasing complexity, without sacrificing performance.
Historically, this point fell on the edge of 2011-2012, when software experts at a prominent workshop in Venice came up with the term Microservices to define a new architectural style they had been exploring at the time. Dubbed the fine-grained SOA (service-oriented architecture where app components connect via a network), it wasn’t an entirely new approach to product design, but rather a refined way of building service-oriented applications.
Strictly speaking, microservices divide the bulk of a product’s functionality into independent chunks of software, while preserving the cohesiveness of the system.
Here’s a general idea of the architectural difference when it comes to comparing microservices vs monolithic software:
Microservice vs Monolithic: Which software architecture is best?
Microservices are much like government decentralization, which gives power and responsibility to the regions while maintaining essential relations to keep the state solid. The opposite of that is centralized governance – where the decision-making is concentrated.
Now, the choice of a suitable model is dictated by your needs and setup.
A small project will hardly see the advantages of using microservices, just like a small state does not need decentralization. Bigger and more complex projects, on the other hand, may very well benefit from a more advanced design approach.
That said, it is not all that simple when you dig deeper. There are many factors to consider when comparing microservices and monolithic architecture.
Comparing Microservices and Monolithic software architecture is not an easy task. We have to remain scientifically objective, after all.
For that reason, a point system seems just right.
1. Performance
When it comes to the inherent performance of application architecture, there are two key indicators – network latency and data throughput. Latency represents the amount of time data takes to travel between two destinations.
Here’s how it works:
To pass information, bytes convert into an electromagnetic signal. It then travels via wires or air and is reassembled back into bytes by the receiving party. Now, we can cut down the decoding time. But since the signal takes time to travel, data transfer will always have a slight delay. It is a natural consequence of the basic laws of physics.
In this regard, having a localized, single-core system is superior to a network of interconnected clients interacting with each other, often over long physical distances. While the latency of a microservice call is minuscule (around 25ms), the more calls – the higher the delay.
There are, of course, solutions that can minimize this gap, like running all calls in parallel or using a so-called fan-out pattern. In this case, the difference tends to zero as the calls increase. And still, Monolithics turn out slightly quicker every time.
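To make the fan-out idea concrete, here is a minimal Python sketch. The service names and the 25 ms latency are illustrative, and the network round-trips are simulated with `asyncio.sleep`; with calls running in parallel, the total delay approaches that of the slowest single call rather than the sum of all of them.

```python
import asyncio
import time

async def call_service(name: str, latency: float) -> dict:
    # Simulate one microservice call's network round-trip.
    await asyncio.sleep(latency)
    return {name: "ok"}

async def sequential(calls):
    # One call after another: total delay is the SUM of the latencies.
    results = {}
    for name, latency in calls:
        results.update(await call_service(name, latency))
    return results

async def fan_out(calls):
    # Fan-out: fire all calls at once; total delay is roughly the SLOWEST call.
    responses = await asyncio.gather(*(call_service(n, l) for n, l in calls))
    merged = {}
    for response in responses:
        merged.update(response)
    return merged

calls = [("users", 0.025), ("orders", 0.025), ("billing", 0.025)]

start = time.perf_counter()
seq_result = asyncio.run(sequential(calls))
seq_time = time.perf_counter() - start

start = time.perf_counter()
fan_result = asyncio.run(fan_out(calls))
fan_time = time.perf_counter() - start

print(f"sequential: {seq_time * 1000:.0f} ms, fan-out: {fan_time * 1000:.0f} ms")
```

Both versions return the same merged response; only the wall-clock time differs.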
The same is true for absolute data throughput (the amount of data processed over time).
A close call in the first standing, but still a point goes to the Monolithic architecture.
2. Resource usage & scalability
Now that we’ve touched on performance, let’s examine resource usage.
This is a tricky one.
At first glance, microservices calls use more resources than the monolith ones when doing the same amount of work.
However, since microservices can allocate resources as needed, they use them a lot more intelligently, decreasing the memory and CPU load. In addition, the more instances are run, the greater this difference becomes in favor of loosely coupled services.
Monolithic software can come out ahead in individual cases (when a call transfers large amounts of data) but falls behind in all other scenarios.
The same principle works when you need to upgrade the computing capabilities as the requirements increase. By managing resources more efficiently, decentralized software easily scales the power up and down, adding or removing cloud computing servers as needed.
Clearly, a win for Microservices.
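The scale-up/scale-down logic described above can be sketched as a toy autoscaler. The 60% CPU target and the 1-10 instance bounds are illustrative assumptions, not any cloud provider’s actual policy:

```python
import math

def desired_instances(current: int, cpu_percent: float,
                      target: float = 60.0, min_n: int = 1, max_n: int = 10) -> int:
    """Toy autoscaling rule: keep average CPU near the target by
    resizing the instance pool proportionally to the current load."""
    if cpu_percent <= 0:
        return min_n
    wanted = math.ceil(current * cpu_percent / target)
    # Clamp to the allowed pool size.
    return max(min_n, min(max_n, wanted))

print(desired_instances(4, 90))  # heavy load: grow the pool to 6
print(desired_instances(4, 30))  # light load: shrink the pool to 2
```

Real autoscalers (in Kubernetes or any major cloud) follow the same proportional idea, just with smoothing and cooldown periods on top.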
3. Development complexity
Speaking about the complexity of the development process.
While the good old monolithic apps call for greater skillsets from individual developers, microservices projects can be spread into smaller tasks between highly specialized devs.
Here’s an illustration to help you understand why:
At that, the overall amount of work is often considerably greater with Microservices.
Unlike single-core projects, assembling multiple modules may involve several source codes, frameworks, and coding languages for that matter.
Data synchronization also adds to the complexity of running dispersed software as opposed to its locally-contained rival. Once again, some tools tackle these issues. Nevertheless, a monolithic architecture is innately clearer and more transparent.
Another point in favor of Monolithics.
Are you still there? Great!
4. Deployment & reliability
One of the main reasons why companies prefer microservices is the stunning deployment opportunities they provide.
Compared to the bulky structure of monolithic software, its counterpart is simple and flexible enough to have updates as frequently as desired. In fact, you don’t have to roll out the entire system after changing some of the functionality. All you need to do is redeploy that particular service.
What’s more, modifying a microservice does not affect the dependent services. Therefore, it won’t threaten the entire system’s work should there be a program malfunction. Whereas even a minor code error can stall the entirety of software built with a monolithic approach.
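That fault isolation reads naturally as code. In this sketch the service names and the simulated failure are invented for illustration; the point is that one broken service degrades only its own section of the response:

```python
def assemble_response(services: dict) -> dict:
    """Call each independent service; a failing one degrades only
    its own section instead of taking the whole system down."""
    response = {}
    for name, call in services.items():
        try:
            response[name] = call()
        except Exception:
            response[name] = None  # degraded section; everything else still works
    return response

def broken_comments_service():
    raise RuntimeError("comments service is down")

page = assemble_response({
    "articles": lambda: ["post-1", "post-2"],
    "comments": broken_comments_service,
})
print(page)  # articles survive, comments degrade to None
```

In a monolith, an equivalent unhandled error in the comments module could crash the whole process serving the page.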
This boosts the software’s reliability, eliminating a whole bunch of critical operational issues.
Something like having more engines on a plane…
In addition, microservices are a lot easier to test. A limited number of features dedicated to each of the services substantially decreases the number of dependencies involved. This makes it much simpler to write and run tests. Therefore, you can release the product a lot earlier.
In this one, Microservices come out ahead.
So far the score is 2 – 2.
5. Technological flexibility
This is where things get interesting.
There are countless development technologies on the market. Some of them are quick, some are easier to build with. Some are better for billing, others are a good fit for data processing, or have better security… You name it.
Microservices empower you to add all of it to the arsenal, taking the best from each technology.
It’s like having all of the superpowers at one’s disposal.
Like that, a piece of software can be quick, robust, capable, and secure all at the same time. It’s no wonder the method is so popular among architects of complex IT projects like Netflix, Medium, and Uber.
This will, of course, require hiring a whole bunch of specialists to implement, as mentioned earlier. But hey, that development complexity point is already granted to Monolithics, so we can’t complain.
Another win for microservices.
6. Team communication
Finally, team communication plays a key part in the process of IT product development, and it can be affected by software architecture choice.
Here’s the thing: by dividing the software into smaller chunks, Microservices not only distribute the tasks but also the teams, decreasing the number of individual communication channels between devs.
This goes in line with Amazon’s well-known “pizza rule”, which states that a team is too big and inefficient if it can’t be fed with two pizzas.
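The math behind that rule is simple: the number of pairwise communication channels grows quadratically with team size, n·(n−1)/2, which is exactly what splitting teams along service boundaries pushes back against. A quick check:

```python
def communication_channels(team_size: int) -> int:
    # Every pair of developers is a potential channel: n * (n - 1) / 2.
    return team_size * (team_size - 1) // 2

for size in (5, 10, 20):
    print(f"{size} devs -> {communication_channels(size)} channels")
```

Two "two-pizza" teams of five (10 channels each, plus one channel between team leads) carry far less coordination overhead than one team of ten with its 45 channels.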
You decide what’s right.
I sure wouldn’t be arguing with Amazon’s expertise in project management.
So, the final round goes to the Microservices architecture, too, and the score is: Microservices – 4, Monolithics – 2
Monolithics, it was a fair battle…
Part 2: How do microservices work?
While it did seem like a logical conclusion, it wouldn’t be the ULTIMATE software architecture guide if we didn’t dig deeper into the subject.
So, let’s move on.
Now that we’ve explored how microservices are different from the traditional monolithic software, let’s examine the technology behind the revolutionary architecture.
Just like monolithic apps, modular software can be built with a wide range of coding languages and frameworks. Therefore, most of the rules for choosing a tech stack apply here as well. That being the case, a microservices tech stack is effectively larger and much more versatile than that of traditional software.
Loosely-coupled apps are very complex structure-wise. So, many aspects of the system’s cohesiveness have to be thought through before jumping into the whirlpool of the dev process.
In our case, it is worthwhile to go over all of it gradually – one functionality at a time. So, let’s jump in!
First of all, any software has to run somewhere.
There are three main hosting options to consider for microservices:
Local server – a traditional enterprise computing model. Companies maintain equipment and software in a confined data center, having direct control over its operation.
Public cloud – a rather modern approach. Here, shared computing resources are provided over the internet and are managed on the side of the cloud provider. We’ve already written about on-demand software recently.
Private cloud – offers opportunities similar to the public cloud. In this case, though, companies own and manage remote server capacity in-house (for security or compliance reasons, mostly).
It should be noted that there are also hybrid cloud-hosting solutions, but that is a topic for another blog post…
In most cases, public cloud hardware is the go-to choice for running microservices. It offers virtually unlimited processing capabilities on rather flexible terms.
And while there is an array of remote infrastructure providers, the most popular of them are represented by:
AWS (Amazon Web Services)
Google Cloud Platform
VMs & container management
Now, there are two principal ways of using cloud resources – virtual machines and containers (each containing individual functionality).
Both use remote hardware to perform tasks. However, VMs emulate entire systems along with their operating systems, whereas containers share the host OS and therefore have a lot of common functionality that need not be executed separately.
This saves a ton of resources, providing a tenfold launch-time difference and a major cut in RAM and CPU usage – in favor of containers, of course. With less overhead and close to zero weight, containers are a much more favorable environment for complex applications.
However, while it’s very convenient to have individual containers for each of the services, it is another challenge to successfully manage it all. Crucial tasks like automation of deployment, scaling, and networking, add up to the complexity of running loosely coupled software.
This is where container orchestration tools come in handy, effectively tackling these kinds of issues.
In this regard, the most popular choices on the market are:
Other solutions come from major cloud providers like Amazon, Google, and Microsoft (Azure).
We have already learned that microservices allow using different tech for individual software components.
On one hand, this gives the flexibility to assign the best-fitting technology for tackling different tasks within the system. On the other, however, it requires establishing effective interaction between those app components.
This is exactly what a service mesh is there for.
A dedicated infrastructure layer, it enables the services to interact with each other via local “sidecar” proxies instead of calling each other directly over the network. In essence, it is an interpreter between services that often “speak” different programming languages.
Service mesh facilitates horizontal (service to service) communication, as opposed to the API gateways that control vertical (client to server) networking. It is also different from container orchestration tools, which are responsible for resource management only.
Some of the widely-applied service mesh solutions include:
When designing modular apps, it is crucial to determine how program components will communicate with the system.
Typically, this task is executed via APIs.
API stands for Application Programming Interface. It enables communication between two systems (for example – your laptop and an app), determining the data transfer protocol. Something like a moderator of the client-to-service conversation who ensures that the message “gets through”.
APIs operate via ‘requests’ and ‘responses’: when a system receives a request, it returns a response. The destination of that response represents an endpoint – essentially, one end of a communication channel between a server and an external device.
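Here is a minimal, self-contained sketch of that request/response cycle using only Python’s standard library. The `/status` endpoint and its JSON payload are invented for the example; a client sends a request to the endpoint and the server returns a response:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The /status endpoint is one end of the client-server channel.
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for any free port, so the demo never collides.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/status") as resp:
    payload = json.loads(resp.read())

server.shutdown()
print(payload)
```

One server, one endpoint, one channel. The next paragraphs show why a microservice system multiplies these.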
Now, what comes down to one communication channel in a monolithic app may generate an array of those in microservices. This is because splitting software into multiple pieces implies that a single client request may call for separate responses from the services, resulting in multiple endpoints. An API gateway sorts it all out by providing a single point of entry into the system…
Ok, I know that just spilled over the sane limit of tech terms per paragraph.
Let’s break it down.
Imagine a typical blog page like this one. It contains a text field, a list of recommended articles, a comment section, a login form, a sharing functionality, etc.
In microservices, a separate module owns each of the described components’ data sets. So, when you open up that page you actually communicate with a set of micro-apps.
If these calls were direct, your browser would have to send separate requests to all of those services (each with an individual web address) in order to assemble the page. For a number of reasons that exceed this article’s threshold for heavy terminology, such an option is inefficient. You may read about network latency here.
Another option is to have a distributing entity to sort through multiple client requests and return a single, “comprehensive” response. Kind of like the packing assistant in a grocery store. You know, the one gathering your stuff while you talk to the cashier and check out – to save everyone time.
That’s API gateways.
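A toy version of that gateway, with the blog-page services stubbed out in plain Python (the service names, routes, and payloads are all invented for illustration):

```python
def users_service(request):
    return {"user": "reader42"}

def articles_service(request):
    return {"recommended": ["5G overview", "Microservices 101"]}

def comments_service(request):
    return {"comments": ["Great post!"]}

# Route table: which backing services a given page needs.
BACKENDS = {
    "/blog-page": [users_service, articles_service, comments_service],
}

def api_gateway(path: str, request=None) -> dict:
    """Single entry point: fans the client request out to every backing
    service and merges their answers into one response."""
    response = {}
    for service in BACKENDS.get(path, []):
        response.update(service(request))
    return response

page = api_gateway("/blog-page")
print(page)
```

The browser makes one request to one address; the gateway does the grocery-bagging behind the scenes, exactly like the packing assistant in the analogy above.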
Some of the best API Gateway solutions are provided by Amazon, Ambassador, Kong, Microsoft, Akamai, Mulesoft, Google, and Express.
Privacy issues accompany any IT product development process.
At that, the nature of microservices poses an elevated threat of security breaches, putting additional pressure on software architects.
More so, securing the containers in which microservices run joined the list of the industry’s main data-protection challenges in 2018.
This is happening for two reasons.
First, it is a well-known fact that a system’s complexity is inversely proportional to its reliability. This is especially true when it comes to software vulnerabilities. Increased interaction between microservices comes hand in hand with additional communication channels – which means more points of potential penetration.
To make things worse, microservice apps are often hosted alongside third-party software on the same physical server. Even the phrase “shared environment” does not sound too safe, to say nothing of the multitude of less obvious ways things can go wrong in a complex cloud infrastructure hosting disintegrated software.
Luckily for IT enterprises, there are solutions for these issues, too:
Last but not least on the list of the main technologies behind microservices is the so-called middleware. These tools are responsible for additional coherence-related tasks like load-balancing, proxies, caching, and routing. While somewhat similar to the already mentioned gateways, middleware doesn’t expose microservices the way an API does.
In terms of microservices middleware, these are the market leaders:
Although there is no reason for smaller teams and projects to give up the well-established and proven monolithic software architecture, Microservices are more progressive and seem to be here to stay for the foreseeable future.
Yes, the approach is more resource-demanding and complex than traditional program development techniques. However, its benefits outweigh the cost, especially for big projects where software reliability and production speed play a key part. The array of technologies backing microservices-based apps is stunning. From design and implementation to deployment and maintenance – versatile program tools are there at one’s aid on every step of the microservices development process.
Contactless payments: spike during COVID-19 and future
Apr 4, 2020
The spike in popularity of contactless payments during the coronavirus pandemic
The coronavirus poses a huge threat to the economies and populations of countries all over the world. One of the channels of the virus’s spread is cash, which can carry a huge number of bacteria and viruses. ATMs with cash-recycling functions become a channel of disease transmission. A way out is contactless payments, which allow you to pay instantly without endangering yourself.
Even the World Health Organization recommends ditching cash in favor of contactless payments to prevent the spread of COVID-19. WHO issued this recommendation after China and Korea began separating and disinfecting used banknotes known to carry viruses and bacteria.
A representative of the World Health Organization noted in a recent interview with The Telegraph:
“We know that money often passes from hand to hand and can collect all kinds of bacteria and viruses. We advise people to wash their hands after handling banknotes and try not to touch their face. Wherever possible, it is advisable to use contactless payments to reduce the risk of the infection spreading.”
When will the economy recover?
Opinions vary greatly on how slowly the global economy will grow.
The conclusion of the US-based Institute of International Finance (IIF) looks quite pessimistic. The IIF estimates that the global economy will grow by a maximum of 1% in 2020 – the lowest rate since the 2007-2008 crisis. In China – the source of the virus – GDP growth will slow to 4% instead of the previously expected 5.9%.
Consulting company McKinsey&Company has outlined three scenarios of crisis development.
The softest scenario is the elimination of coronavirus outbreaks up to the second quarter of 2020. In this case, the global GDP will grow by about 2% instead of 2.5%. If the pandemic persists beyond the first half of the year, global economic growth will not exceed 1.5%. In case the pandemic continues to the second or third quarter, global GDP could fall by 1.5%.
So, how can you help the world? One option is using contactless payments. A safe way of handling money, its popularity is soaring due to the COVID-19 pandemic.
How does the technology work?
In most cases, contactless payments are enabled by NFC chips that are found in most modern smartphones and tablets, as well as in many smartwatches and smart bracelets. These chips can transfer encrypted data from a customer’s bank card to another chip, for example, in a POS terminal at a store. The data exchange takes 1-2 seconds, followed by a successful payment.
The contactless connection between the devices is conducted via a radio signal (Radio Frequency Identification technology). The NFC chip uses a dedicated radio frequency (13.56 MHz), which works only when the devices are close to each other.
There are two main options for using NFC technology in retail payments:
The first way is payments by banking cards supporting contactless technologies (for example, MasterCard PayPass/ Visa PayWave).
The second way, which is gaining popularity, is by mobile devices through paying services (Apple Pay, Android Pay, Samsung Pay).
To use a smartphone as a contactless payment tool, you need to link a bank card to your smartphone using a special application. At this stage, the app generates and stores an encrypted “key” (a token). After that, you can pay with your smartphone without using the card.
Contactless payment is becoming more and more entrenched in mobile devices. Near Field Communication (NFC) technology is already available in Apple, Samsung, and other mobile devices. In addition to NFC, Samsung has introduced magnetic security transfer (MST) technology for smartphones to interact with terminals that accept magnetic stripe cards.
Wearable devices also influence contactless payments. Some of the leading tech companies like Apple and Samsung produce watches with an embedded NFC chip. Traditional watchmakers, like Mondaine and Swatch, are also keeping up.
However, NFC technology is not only extremely convenient.
Contactless payments offer settlement security provided by Mastercard Digital Enablement Service (MDES), a global digital tokenization platform. MDES can turn any device with an NFC chip and an Internet connection into a secure payment tool by creating a unique token to protect transactions. A token is a 16-digit combination tied to a user’s bank card number and unique for each connected device.
Tokenized payments hide the bank card details. During the payment process, Mastercard transforms the token back into the card number so that the bank can verify the payer.
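A heavily simplified sketch of the idea in Python. This illustrates tokenization in general, not the actual MDES protocol; the card number, device ID, and vault structure are invented for the example:

```python
import secrets

class TokenVault:
    """Illustrative tokenization sketch: the merchant only ever sees
    the 16-digit token, never the real card number."""

    def __init__(self):
        self._vault = {}  # token -> (card number, device), kept server-side

    def tokenize(self, card_number: str, device_id: str) -> str:
        # Issue a random 16-digit token, unique per device.
        token = "".join(secrets.choice("0123456789") for _ in range(16))
        self._vault[token] = (card_number, device_id)
        return token

    def detokenize(self, token: str) -> str:
        # Only the payment network performs this step, during authorization.
        return self._vault[token][0]

vault = TokenVault()
token = vault.tokenize("5300123412341234", device_id="phone-1")
print(token, vault.detokenize(token) == "5300123412341234")
```

If a merchant’s database leaks, the attacker gets tokens that are useless without the vault, which is the whole point of the scheme.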
Benefits of Contactless Payments
The main advantages of using contactless payments:
Protection from COVID-19. Using contactless payments you avoid contact with things (money) that potentially can be infected with the coronavirus.
Simplicity. One-touch payment without a PIN code or signature. You need less time to pay for a purchase.
Speed. Payment using contactless technology occurs almost instantly. This saves time for the client and makes the cashier’s work more effective.
It’s an innovation. Contactless payment is the most modern payment technology, and a business that uses it gains respect from customers.
But how do contactless payments affect business and the economy? And what are some examples of successful implementation of contactless payment technology?
Here’s a map of the contactless payment limits in various parts of the world:
In 2016, eCommerce giant Amazon launched a new type of Amazon Go offline store in Seattle – with no cash registers or cashiers. In there, buyers just pick up the products and leave the store, and payments are done automatically, contactless, and discreetly.
The company combines RFID (radio frequency identification technology) with smart video cameras. The system records when a customer takes an item off the shelf, while video cameras locate the customer inside the store. Combined data analysis allows the system to identify who took which items and record it in a shopping list in the Amazon mobile app.
This approach also allows customers to perform the reverse operation: return a product to the shelf and thus automatically exclude it from the virtual shopping basket. Products can be carried out in pockets or hands. Once a person walks through the turnstiles, the money is automatically deducted from their Amazon account.
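The virtual-basket logic described above reads naturally as code. The catalogue, item names, and prices below are invented for illustration; the pick-up and put-back events stand in for the shelf-sensor and camera signals:

```python
from collections import Counter

PRICES = {"milk": 2.50, "bread": 1.80}  # illustrative catalogue

class VirtualBasket:
    def __init__(self):
        self.items = Counter()

    def pick_up(self, item: str):
        # Shelf sensor + camera event: item taken off the shelf.
        self.items[item] += 1

    def put_back(self, item: str):
        # Reverse operation: item returned, dropped from the basket.
        if self.items[item] > 0:
            self.items[item] -= 1

    def checkout(self) -> float:
        # Charged automatically when the shopper walks out.
        return round(sum(PRICES[i] * n for i, n in self.items.items()), 2)

basket = VirtualBasket()
basket.pick_up("milk")
basket.pick_up("bread")
basket.put_back("bread")
print(basket.checkout())  # only the milk is charged
```

The hard part in the real system is, of course, attributing each event to the right shopper; the basket arithmetic itself is this simple.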
Amazon Go, where the whole process of shopping (from selection to payment) is carried out by the buyer, brings 50% more profit than traditional stores.
The company is developing a new payment method that will allow consumers to pay for their purchases using the palm of their hand, without using a bank card.
Amazon has filed a patent for a “contactless biometric identification system” with a palm scanner. The tech giant is developing this project together with Visa and Mastercard. Major banking institutions like JPMorgan and Wells Fargo are also taking part in its development.
The company plans to offer customers the opportunity to link a bank card to their palm. This would allow them to make a purchase with one touch, without using “plastic”. The company also plans to introduce such payments in the Whole Foods supermarket chain. Clearly, Amazon is the world’s leading contactless payment developer.
Contactless payments are becoming part of the “on-demand economy”. Take taxi applications such as Uber, Bolt, and Gett for reference. These services allow users to link a card to the mobile application once and then automatically pay for their trips without touching either cash or the interface of the application itself.
Besides mobile platforms, functions of contactless payments are available both on social networks (Facebook, Twitter) and messengers like Telegram, WhatsApp, and Viber.
Over the next decade, we will see more changes in the banking industry than in the last 100 years. KPMG Global’s research “The Future of Digital Banking” confirms that technologies such as Artificial Intelligence, Blockchain, Biometrics, 5G, AR/VR will have the greatest impact on the financial services industry in the next 10-15 years. So, voice command and biometrics can replace contactless payments pretty soon. Thanks to the “Internet of Things”, any device can become a digital channel for paying for goods and services.
Examples of successful implementation of contactless payments in banking are Monobank, Revolut, and N26. All of these banks are mobile-only – they have no branch offices – yet they are incredibly popular, mostly among younger users, of course.
These banks are actively competing with each other.
For example, N26 is one of the most highly regarded startups in the world. Revolut has attracted more than $350 million in investments for its development, valued at over $2 billion. Monobank reached the mark of 2 million users in just 3 years – an impressive result.
Effect on business and economy
The sphere of contactless payments is huge and is backed by eCommerce companies, blockchain platforms, mobile developers, financial projects, banks, and even companies specializing in passenger transportation.
The rapid development of mobile technologies and contactless payments is creating a new model of user behavior. This behavior model prefers the active use of smartphones, contactless payments and various connected devices in everyday life.
Banks that do not think about the development of contactless payments may lose both clients and time in the future.
One of the main factors that stimulate the development of contactless payments in the world is the desire to lead innovation. Contactless technologies are a sign of the modernity of banks and businesses.
The future of contactless payments
Many countries have already adopted cashless technology. Some experts even compare the volume of NFC payments in a country to its economic potential. Although cash payments are still prevalent in developing countries, high-population states like China, India, Brazil, and the USA are already entering a contactless future.
According to Business Insider, the flagship among developing countries is China, with its unique payment technology integrated into a local social network – WeChat. This subsidiary of China’s giant Tencent is a mix of services, one of which is WeChat Pay, a contactless payment system with more than 1 billion users. It allows sending money directly from one smartphone to another within the app, further boosting the commercial boom in China.
The Merchant Savvy web service likewise reports that China dominates contactless payment development. In February 2019, 1 out of 9 people on the planet used Chinese payment systems to send and receive cash during the Chinese New Year.
In the US and Europe, card payments are still more popular than mobile payments. There are various reasons for this, but chief among them are conservatism and the entrenched authority of EMV chip cards, which are very popular among middle-class consumers.
Contactless payments are the future of money transfers. The coronavirus outbreak and its worldwide spread have significantly increased the popularity of contactless payments, since cash can act as a transmission vector for COVID-19. Contactless payments are one of the best ways to help prevent its spread.
Contactless payments are a very convenient way to transfer money. Businesses are actively implementing the technology in their own work, increasing NFC’s popularity. The cases described in this article demonstrate the potential for the spread of contactless payments around the world.