A backend without problems. A miracle, or the future?

Hello!

Friends, tell me: do you know how to build a backend for client-server applications? In a perfect world it all starts with designing the architecture, then choosing the platform, then figuring out how many machines you need, both virtual and physical. Then comes the process of bringing that architecture up for development and testing. All set? Good, now you write the code, make the first commit, update the code on the server from the repository. Open the console or the browser, check that it works, and move on. So far it's simple, but what comes next?

Over time the architecture inevitably grows: new services, new servers, and then it is time to think about scalability. More than one server? Then the logs need to be collected in one place, and the thought of a log aggregator immediately comes to mind.

And when something, God forbid, goes down, thoughts turn to monitoring. Sounds familiar, doesn't it? My friends and I have been through all of it. And when you work in a team, other complications appear as well.

You might think we simply have not heard of AWS, Jelastic, Heroku, DigitalOcean, Puppet/Chef, Travis, git hooks, Zabbix, Datadog, or Loggly. I can assure you that is not the case. We tried to make friends with each of these systems; to be precise, we set each of them up for ourselves, but never got the effect we wanted. There were always pitfalls, and there was always some part of the job we wished we could at least automate.

After living in this world for quite a long time, we thought: "Well, we're developers, let's do something about it." We wrote the problems that accompanied us at every stage of creating and growing a project down on a separate page and turned them into the features of a future service.
And two months later the service was born: lastbackend.com.



Design


We started with the very first issue you face when building a server system: drawing a visual scheme of the project. I don't know about you, but seeing which element is connected to which and how the data flows is far more pleasant than a list of servers and environment settings in a wiki or a Google Doc. But you be the judge.



Designing the scheme is painfully simple and intuitive: on the left is a list of backend elements, on the right is the working area. You just drag and drop. Need a node.js element to connect to mongodb? Take the mouse and draw the connection.
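
To give a sense of what that connection means on the code side, here is a minimal node.js sketch: it assumes the platform hands the element a MongoDB address through an environment variable (the name MONGODB_URI is my own assumption, not a documented part of the service). Reading the address from the environment keeps the code independent of where the database element actually runs.

    // Minimal sketch: a node.js element reading its MongoDB address from the
    // environment and opening a connection. MONGODB_URI is an assumed variable
    // name, not something lastbackend.com is documented to provide.
    const { MongoClient } = require('mongodb');

    const uri = process.env.MONGODB_URI || 'mongodb://localhost:27017';
    const client = new MongoClient(uri);

    async function main() {
      await client.connect();                     // open the connection pool
      const db = client.db('app');                // pick a database
      const ping = await db.command({ ping: 1 }); // simple round-trip check
      console.log('connected to mongodb:', ping);
    }

    main().catch(err => {
      console.error('mongodb connection failed:', err);
      process.exit(1);
    });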



Setup and scaling


Each element in the system is unique: you configure it and, if needed, enable auto-scaling. If the element is a load balancer, there is a place to specify the upstream selection rules. If the element has source code, you specify its repository, environment variables, and dependencies. The system itself will download, install, and launch it.
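
Purely for illustration, the information you fill in for such an element could be pictured as a structure like the one below. This is not the service's actual format, just an assumed shape showing the repository, branch, environment variables, dependencies, and scaling limits that are involved.

    // Purely illustrative: an assumed description of one element in the scheme,
    // not lastbackend.com's real configuration format.
    const apiElement = {
      type: 'node.js',
      repository: 'https://github.com/example/api.git', // where the source lives
      branch: 'production',                              // branch to build from
      env: {
        NODE_ENV: 'production',
        MONGODB_URI: 'mongodb://db-element:27017/app',   // points at the db element
      },
      dependencies: 'package.json', // installed automatically before launch
      scaling: {
        auto: true, // enable auto-scaling for this element
        min: 1,
        max: 4,
      },
    };

    module.exports = apiElement;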



And of course we thought about auto-deploy: change the source code in a specific branch, and the element is quickly updated. We tried to make everything convenient, since we are the first users ourselves and see all the flaws.
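
Conceptually, auto-deploy comes down to reacting to a push on the watched branch. The sketch below shows that general idea in node.js for a GitHub-style push webhook; the branch name and the update command are my own assumptions, and the real service of course handles this on its own side, with proper authentication and error handling.

    // Rough sketch of the auto-deploy idea: listen for push webhooks and update
    // the element when the watched branch changes. Not lastbackend.com's code.
    const http = require('http');
    const { execFile } = require('child_process');

    const WATCHED_BRANCH = 'refs/heads/production'; // assumed branch to deploy from

    http.createServer((req, res) => {
      let body = '';
      req.on('data', chunk => { body += chunk; });
      req.on('end', () => {
        try {
          const payload = JSON.parse(body);
          if (payload.ref === WATCHED_BRANCH) {
            // Pull the new code; restarting the process would follow in a real setup.
            execFile('git', ['pull'], err => {
              if (err) console.error('update failed:', err);
              else console.log('element updated from', payload.ref);
            });
          }
        } catch (e) {
          console.error('ignoring malformed webhook payload');
        }
        res.end('ok');
      });
    }).listen(9000);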

Deployment


And here is the interesting part. When the diagram is ready, deploying it takes a few seconds; sometimes longer, sometimes almost instantly. It all depends on the types of elements and their source code. We mostly write in node.js, and our own scheme deploys in a couple of seconds. The most exciting moment is watching all the configured elements on the diagram come alive as the indicators light up with each element's current state.


I forgot to add that each element can be run on a different hosting provider or in a different data center. For example, to balance traffic between two countries, one element can run in the first country, another in the second, and the database, say, somewhere in between. Not the best example, perhaps, but in general I think the capability is quite useful, especially if you know how and where to apply it.



That is basically the core of the service, which lets you bring up the server side of any project quickly, beautifully, and without problems. But we immediately thought about the problems that come next, namely:

Log aggregation


Each element sends its logs to a single log store, where we can view and analyze them, search and filter. There is no longer any need to connect over ssh and dig through files with grep to find some hidden piece of information or simply to analyze the data.
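
The underlying idea is simply that every element ships its log lines to one collector instead of keeping them on its own disk. A minimal node.js sketch of that idea could look like this; the LOG_COLLECTOR_URL and ELEMENT_NAME variables are assumed names, not part of the service's documented interface.

    // Minimal sketch of shipping structured log lines to a central collector
    // instead of grepping local files over ssh. Variable names are assumptions.
    const http = require('http');

    const collector = new URL(process.env.LOG_COLLECTOR_URL || 'http://localhost:8080/logs');

    function shipLog(level, message) {
      const line = JSON.stringify({
        element: process.env.ELEMENT_NAME || 'api', // which element produced the line
        level,
        message,
        time: new Date().toISOString(),
      });
      const req = http.request(collector, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
      });
      req.on('error', err => console.error('log shipping failed:', err.message));
      req.end(line); // send the JSON line and close the request
    }

    shipLog('info', 'service started');
    shipLog('error', 'mongodb connection lost');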

Monitoring and alerting system


Of course we ourselves want to be able to go fishing, or just relax, knowing that if something happens we will definitely be told about it. So we made sure that information about every failure arrives instantly. Now we no longer worry about missing something important.
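
A common building block for this kind of monitoring is a health endpoint on each element that a monitor can poll and alert on. The sketch below only illustrates that general pattern in node.js; it is not how lastbackend.com implements its checks.

    // General idea of a health endpoint a monitoring system can poll: report
    // whether the element's critical dependency is reachable. Illustration only.
    const http = require('http');

    let dbHealthy = true; // flipped by the rest of the app when the db connection drops

    http.createServer((req, res) => {
      if (req.url === '/health') {
        const status = dbHealthy ? 200 : 503; // 503 lets the monitor raise an alert
        res.writeHead(status, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ healthy: dbHealthy, checkedAt: new Date().toISOString() }));
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(3000);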

Summary


Those are basically the main problems we tried to solve with our service, in order to make life a little easier for fellow developers.
Of course I could not list every feature here and tried to highlight only the most important ones, but in the comments I can answer questions and requests in more detail.

Thank you for your attention. I would love to read your feedback, answer your questions, note your advice and suggestions, and give you access to the closed beta, which according to the preliminary plan will start around the May holidays.

You can request an invitation to the beta directly via this link. On the day testing opens you will receive an email with your access details.

PS


So, friends, if you're interested, we can start a series of technical articles about the technology stack we use, namely node.js, mongodb, redis, sockets, angular, svg, and so on.
Article based on information from habrahabr.ru
