
Does splitting a potentially monolithic application into several smaller ones help prevent bugs? [on hold]


Another way of asking this is: why do programs tend to be monolithic?



I am thinking of an animation package such as Maya, which people use for a variety of different workflows.



If the animation and modelling capabilities were split into their own separate application and developed separately, with files being passed between them, would they not be easier to maintain?























put on hold as primarily opinion-based by gnat, Greg Burghardt, BobDalgleish, wheaties, Thomas Owens Mar 15 at 15:30


Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.














  • 9





    If the animation and modelling capabilities were split into their own separate application and developed separately, with files being passed between them, would they not be easier to maintain? Don't mix up easier to extend with easier to maintain: a module per se isn't free of complications or dubious designs. Maya can be hell on earth to maintain while its plugins are not. Or vice versa.

    – Laiv
    Mar 12 at 13:24








  • 37





    I'll add that a single monolithic program tends to be easier to sell, and easier for most people to use.

    – DarthFennec
    Mar 12 at 16:34






  • 2





    @DarthFennec The best apps look like one app to the user but utilize whatever is necessary under the hood. How many microservices power the various websites you visit? Almost none of them are monoliths anymore!

    – corsiKa
    Mar 12 at 16:38






  • 22





    @corsiKa There's usually nothing to gain by writing a desktop application as multiple programs that communicate under the hood, that isn't gained by just writing multiple modules/libraries and linking them together into a monolithic binary. Microservices serve a different purpose entirely, as they allow a single application to run across multiple physical servers, allowing performance to scale with load.

    – DarthFennec
    Mar 12 at 16:50








  • 5





    @corsiKa - I would guess that the overwhelming majority of websites I use are still monoliths. Most of the internet, after all, runs on WordPress.

    – Davor Ždralo
    Mar 12 at 19:52
















design architecture maintainability application-design






asked Mar 12 at 11:38









dnv

10 Answers


















92














Yes. Generally, two smaller, less complex applications are much easier to maintain than a single large one.



However, you get a new type of bug when the applications all work together to achieve a goal. In order to get them to work together they have to exchange messages and this orchestration can go wrong in various ways, even though every application might function perfectly. Having a million tiny applications has its own special problems.



A monolithic application is really the default option you end up with when you add more and more features to a single application. It's the easiest approach when you consider each feature on its own. It's only once it has grown large that you can look at the whole and say "you know what, this would work better if we separated out X and Y".
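The new class of orchestration bugs can be sketched in a few lines. This is a hypothetical illustration (the `exporter`/`importer` functions and the message schema are invented), assuming two cooperating programs hand data off via a serialized message:

```python
import json

# Hypothetical sketch: two "applications" exchanging a message.
# Each may work perfectly in isolation, yet the handoff itself can
# fail in ways a monolith never sees: schema drift, missing fields,
# version mismatch.

def exporter(scene):
    # App A serializes its result for App B.
    return json.dumps({"version": 1, "vertices": scene["vertices"]})

def importer(payload):
    # App B must defend against every way the handoff can go wrong.
    msg = json.loads(payload)
    if msg.get("version") != 1:
        raise ValueError("unsupported message version")
    if "vertices" not in msg:
        raise ValueError("malformed message: no vertices")
    return msg["vertices"]

vertices = importer(exporter({"vertices": [(0, 0), (1, 0), (0, 1)]}))
```

Note that even on the happy path the round trip silently changed the data: the tuples went in and lists came out, a bug class that exists only at the seam between the two programs.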



























  • 6





    Yes, and there are also performance considerations, e.g. the cost of passing around a pointer versus serializing data.

    – JimmyJames
    Mar 12 at 14:31






  • 63





    "Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except when it is not. It depends heavily on where and how those two applications have to interface with each other.

    – Doc Brown
    Mar 12 at 16:29






  • 10





    "Generally 2 smaller less complex applications are much easier to maintain than a single large one." I think I'll want some more explanation for that. Why exactly would the process of generating two executables instead of one from a code base magically make the code easier? What decides how easy code is to reason about is how tightly coupled it is, and similar things. But that's a logical separation and has nothing to do with the physical one.

    – Voo
    Mar 12 at 17:46








  • 11





    @Ew The physical separation does not force a logical separation, that's the problem. I can easily design a system where two separate applications are closely coupled. Sure there's some correlation involved here since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I can claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up-to-date with their tools will probably also worry about maintenance of code.

    – Voo
    Mar 12 at 18:33








  • 9





    I think the discussion here can be summarized with 2 statements: 1) Splitting an app itself does not make an app more maintainable - on the contrary, it provides another possible point of failure 2) Splitting an app forces you to think about where to split it, which provides an advantage compared to a monolith where that has never been done.

    – R. Schmitz
    Mar 13 at 11:22



















48















Does splitting a potentially monolithic application into several smaller ones help prevent bugs




Things are seldom that simple in reality.



Splitting up definitely does not help to prevent those bugs in the first place. It can sometimes help to find bugs faster. An application which consists of small, isolated components may allow more individual ("unit"-style) tests for those components, which can sometimes make it easier to spot the root cause of certain bugs and so fix them faster.



However,




  • even an application which appears to be monolithic from the outside may consist of a lot of unit-testable components inside, so unit testing is not necessarily harder for a monolithic app


  • as Ewan already mentioned, the interaction of several components introduces additional risks and bugs, and debugging an application system with complex interprocess communication can be significantly harder than debugging a single-process application



This also depends a lot on how well a larger app can be split up into components, how broad the interfaces between the components are, and how those interfaces are used.



In short, this is often a trade-off, and nothing where a "yes" or "no" answer is correct in general.




why do programs tend to be monolithic




Do they? Look around you, there are gazillions of Web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).




would they not be easier to maintain




"Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams: a better-distributed workload, specialized teams with a clearer focus, and so on.
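The point that an outwardly monolithic app can still contain unit-testable components can be sketched. The `CurveEvaluator` below is an invented stand-in for an internal animation module; its "monolithic" packaging doesn't stop it from being tested in isolation:

```python
# Hypothetical sketch: a component with no outward dependencies,
# living inside a single-binary application, is still testable on
# its own.

class CurveEvaluator:
    """Internal animation component: linear keyframe interpolation."""

    def __init__(self, keyframes):
        # keyframes: list of (time, value) pairs
        self.keyframes = sorted(keyframes)

    def value_at(self, t):
        # Clamp outside the keyframe range, interpolate inside it.
        ks = self.keyframes
        if t <= ks[0][0]:
            return ks[0][1]
        if t >= ks[-1][0]:
            return ks[-1][1]
        for (t0, v0), (t1, v1) in zip(ks, ks[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

curve = CurveEvaluator([(0, 0.0), (10, 5.0)])
```

A unit test can exercise this class directly, with no need to stand up the rest of the (hypothetical) application around it.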



























  • 3





    w.r.t. your last sentence, Conway's law says that system structure tends to mimic org. structure: devs/teams are more familiar with some parts than others, so whilst fixes/improvements should happen in the most relevant part, it may be easier for a dev to hack it into "their" parts rather than (a) learn how that other part works or (b) work with someone more familiar with that part. This is related to the "seams" @TKK mentions, and how difficult it can be to find and enforce "correct"/simple ones.

    – Warbo
    Mar 13 at 16:35



















34














I'll have to disagree with the majority on this one. Splitting up an application into two separate ones does not in itself make the code any easier to maintain or reason about.



Separating code into two executables just changes the physical structure of the code, but that's not what is important. What decides how complex an application is, is how tightly coupled the different parts that make it up are. This is not a physical property, but a logical one.



You can have a monolithic application that has a clear separation of different concerns and simple interfaces. You can have a microservice architecture that relies on implementation details of other microservices and is tightly coupled with all others.



What is true is that the process of working out how to split up one large application into smaller ones is very helpful when trying to establish clear interfaces and requirements for each part. In DDD speak, that would be coming up with your bounded contexts. But whether you then create lots of tiny applications or one large one that has the same logical structure is more of a technical decision.
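A minimal sketch of that distinction: the two hypothetical contexts below live in one process, yet stay loosely coupled because they communicate only through a narrow, explicit interface. Packaging them as two executables would not change this logical structure.

```python
# Hypothetical sketch: low logical coupling inside a single process.
# The physical packaging (one binary vs. two) is orthogonal.

class ModelingContext:
    def export_mesh(self):
        # The only thing the animation side is allowed to see.
        return {"vertices": [(0, 0, 0), (1, 0, 0)], "edges": [(0, 1)]}

class AnimationContext:
    def __init__(self, mesh):
        # Depends only on the exported mesh, never on
        # ModelingContext internals.
        self.mesh = mesh

    def frame_count(self, seconds, fps=24):
        return int(seconds * fps)

anim = AnimationContext(ModelingContext().export_mesh())
```

The tight-coupling failure mode would be `AnimationContext` reading private fields of `ModelingContext`; nothing about running in one process forces that, and nothing about running in two processes prevents its moral equivalent.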






























  • But what if one takes a desktop application with multiple editing modes and instead just makes one desktop application for each mode, which a user would open individually rather than having them interface? Would that not eliminate a nontrivial amount of code dedicated to producing the "feature" of "user can switch between editing modes"?

    – The Great Duck
    Mar 13 at 2:07








  • 3





    @TheGreatDuck That sounds like it would also eliminate a non-trivial amount of users who don't like having to switch between different applications. ;) But yes, eliminating features will generally lead to simpler code. Eliminate spell-checking and you will remove the possibility of having spell-checking bugs. It's just rarely done because the feature was added because someone wanted it.

    – Odalrick
    Mar 13 at 10:24








  • 1





    @TheGreatDuck Surely the design of the UX should come before any architectural decisions. There's no point having the best-designed architecture if nobody uses your program. First decide what you want to build, and based on that decide on the technical details. If two separate applications is preferred, go for it. You can still share a lot of code via shared libraries though.

    – Voo
    Mar 14 at 7:29











  • Is it really true to say that the complexity of the system is due to the tight coupling of the parts? I would want to say that the total complexity increases if you partition your system, as you introduce indirection and communication, although the complexity of the specific individual components is isolated in a bounded state of more limited complexity.

    – Alex
    Mar 14 at 22:22











  • @Alex I think the confusion might stem from a misunderstanding what coupling means in this context. The wiki article is not particularly great, but gives some idea. Low coupling basically means having modules that have clear responsibilities and bounds. High coupling leads to code where if you want to change one thing you'll have to fix code in lots of other parts, including other modules.

    – Voo
    Mar 15 at 8:12



















12














Easier to maintain once you've finished splitting them, yes. But splitting them is not always easy. Trying to split off a piece of a program into a reusable library reveals where the original developers failed to think about where the seams should be. If one part of the application reaches deep into another part, it can be difficult to fix. Ripping the seams forces you to define the internal APIs more clearly, and this is what ultimately makes the code base easier to maintain. Reusability and maintainability are both products of well-defined seams.
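A tiny hypothetical illustration of such a seam (the `Scene` class and `render` function are invented): the renderer depends only on a deliberate, narrow interface rather than reaching into the scene's internals, so the scene module could later be split off as a library without ripping anything.

```python
# Hypothetical sketch: a seam. The renderer never touches
# scene._objects directly; it only uses the public interface.

class Scene:
    def __init__(self):
        # Internal representation: free to change without breaking
        # anyone on the other side of the seam.
        self._objects = {"cube": {"visible": True},
                         "light": {"visible": False}}

    def visible_objects(self):
        # The seam: a deliberate, narrow API.
        return [name for name, o in self._objects.items()
                if o["visible"]]

def render(scene):
    # Depends only on the seam, so Scene could move to its own
    # library (or process) without this code changing.
    return ",".join(sorted(scene.visible_objects()))

output = render(Scene())
```

The "reaching deep" anti-pattern would be `render` iterating over `scene._objects` itself; that is exactly the kind of dependency that makes a later split painful.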






























  • Great post. I think a classic/canonical example of what you talk about is a GUI application. Many times a GUI application is one program, and the backend/frontend are tightly coupled. As time goes by, issues arise: someone else needs to use the backend but can't, because it is tied to the frontend, or the backend processing takes too long and bogs down the frontend. Often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.

    – Trevor Boyd Smith
    Mar 12 at 17:32



















10














It's important to remember that correlation is not causation.



Building a large monolith and then splitting it up into several small parts may or may not lead to a good design. (It can improve the design, but it isn't guaranteed to.)



But a good design often leads to a system being built as several small parts rather than a large monolith. (A monolith can be the best design, it's just much less likely to be.)



Why are small parts better? Because they're easier to reason about. And if it's easy to reason about correctness, you're more likely to get a correct result.



To quote C.A.R. Hoare:




There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.




If that's the case, why would anyone build an unnecessarily complicated or monolithic solution? Hoare provides the answer in the very next sentence:




The first method is far more difficult.




And later in the same source (the 1980 Turing Award Lecture):




The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.






































    4














    This is not a question with a yes or no answer. The question is not just ease of maintenance; it is also a question of the efficient use of skills.



    Generally, a well-written monolithic application is efficient. Inter-process and inter-device communication is not cheap. Breaking up a single process decreases efficiency. However, executing everything on a single processor can overload the processor and slow performance. This is the basic scalability issue. When the network enters the picture, the problem gets more complicated.



    A well-written monolithic application that can operate efficiently as a single process on a single server can be easy to maintain and keep free of defects, but still not be an efficient use of coding and architectural skills. The first step is to break the process into libraries that still execute in the same process but are coded independently, following disciplines of cohesion and loose coupling. A good job at this level improves maintainability and seldom affects performance.



    The next stage is to divide the monolith into separate processes. This is harder because you enter into tricky territory. It's easy to introduce race condition errors. The communication overhead increases and you must be careful of "chatty interfaces." The rewards are great because you break a scalability barrier, but the potential for defects also increases. Multi-process applications are easier to maintain on the module level, but the overall system is more complicated and harder to troubleshoot. Fixes can be devilishly complicated.



    When the processes are distributed to separate servers or to a cloud style implementation, the problems get harder and the rewards greater. Scalability soars. (If you are considering a cloud implementation that does not yield scalability, think hard.) But the problems that enter at this stage can be incredibly difficult to identify and think through.
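The "chatty interface" cost mentioned above can be sketched with an assumed fixed per-call overhead (the 5 ms figure and both fetch functions are invented for illustration): N fine-grained cross-process calls pay the round-trip cost N times, while a batched call pays it once.

```python
# Hypothetical sketch: communication overhead of a chatty interface
# vs. a batched one, modeled as a fixed cost per remote round trip.

PER_CALL_OVERHEAD_MS = 5  # assumed round-trip cost, for illustration

def chatty_fetch(ids):
    # One remote call per item: overhead scales with N.
    return len(ids) * PER_CALL_OVERHEAD_MS

def batched_fetch(ids):
    # One remote call for all items: overhead is constant.
    return PER_CALL_OVERHEAD_MS if ids else 0

chatty = chatty_fetch(range(100))    # 100 round trips of pure overhead
batched = batched_fetch(range(100))  # a single round trip
```

Inside a monolith both designs cost about the same (a function call is nearly free), which is why the chatty shape only becomes a defect once the process boundary appears.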





































      4














      No, it does not make it easier to maintain. If anything, welcome to more problems.



      Why?




      • The programs are not orthogonal: they need to preserve each other's work insofar as is reasonable, which implies a common understanding.

      • Much of the code in both programs is identical. Are you maintaining a common shared library, or maintaining two separate copies?

      • You now have two development teams. How are they communicating?


      • You now have two products that need:




        • a common UI style, interaction mechanisms, etc... So you now have design problems. (How are the dev teams communicating again?)

        • backward compatibility (can modeller v1 be imported into animator v3?)

        • cloud/network integration (if its a feature) now has to be updated across twice as many products.




      • You now have three consumer markets: Modellers, Animators and Modeller Animators




        • They will have conflicting priorities

        • They will have conflicting support needs

        • They will have conflicting usage styles



      • Do the Modeller Animators have to open two separate applications to work on the same file? Is there a third application with both functions, does one application load the functions of the other?

      • etc...


      That being said, smaller code bases are easier to maintain at the application level; you're just not going to get a free lunch. This is the same problem at the heart of Micro-Service/Any-Modular-Architecture. It's not a panacea: maintenance difficulty at the application level is traded for maintenance difficulties at the orchestration level. Those issues are still issues; they just aren't in the code base any more, and they will need to be either avoided or solved.



      If solving the problem at the orchestration level is simpler than solving it at each application level, then it makes sense to split it into two code bases and deal with the orchestration issues.



      Otherwise no, just do not do it; you would be better served by improving the internal modularity of the application itself. Push sections of code out into cohesive, easier-to-maintain libraries that the application acts as a plugin to. After all, a monolith is just the orchestration layer of a library landscape.
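That last suggestion — the monolith as an orchestration layer over cohesive libraries — can be sketched roughly. The modeling and animation functions below are invented stand-ins for real libraries:

```python
# Hypothetical sketch: a "modular monolith". The application itself
# is only orchestration; all domain logic lives in library code.

def modeling_lib_build_mesh():
    # Stand-in for a separately maintained modeling library.
    return {"vertices": 8, "faces": 6}

def animation_lib_animate(mesh, frames):
    # Stand-in for a separately maintained animation library.
    return [{"frame": f, "mesh": mesh} for f in range(frames)]

def application(frames=3):
    # The monolith: wires the libraries together and nothing more.
    mesh = modeling_lib_build_mesh()
    return animation_lib_animate(mesh, frames)

clip = application()
```

Each "library" can be owned, tested, and versioned on its own, yet the whole still ships (and deploys) as one program.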





































        3














        There were a lot of good answers, but since it's almost a dead split, I'll throw my hat into the ring too.



        In my experience as a software engineer, I have found this to not be a simple problem. It really depends on the size, scale, and purpose of the application. Older applications, by virtue of the inertia required to change them, are generally monolithic, as this was common practice for a long time (Maya would qualify in this category). I assume you're talking about newer applications in general.



        In small enough applications that are more or less single-concern, the overhead required to maintain many separate parts generally exceeds the utility of having the separation. If it can be maintained by one person, it can probably be made monolithic without causing too many problems. The exception to this rule is when you have many different parts (a frontend, a backend, perhaps some data layers in between) that are conveniently separated (logically).



        In very large applications, even single-concern ones, splitting up makes sense in my experience. You have the benefit of reducing a subset of the class of possible bugs in exchange for other (sometimes easier to solve) bugs. In general, you can also have teams of people working in isolation, which improves productivity. Many applications these days, however, are split pretty finely, sometimes to their own detriment. I have also been on teams where the application was unnecessarily split across so many microservices that it introduced a lot of overhead when things stopped talking to each other. Additionally, having to hold all of the knowledge of how each part talks to the other parts gets much harder with each successive split. There is a balance, and as you can tell by the answers here, the way to do it isn't very clear, and there is really no standard in place.






























        • 2





          My first job as a programmer was as a millennium-bug programmer. The software I was working on was split into hundreds of little programs which all did a little part, strung together with batch files and using files to communicate state. It was a big mess, invented in a time when computers were slow, had little memory, and storage was expensive. When I worked with it, the code was already 10-15 years old. Once we were done they asked my advice, and my advice was to convert everything to a new monolithic application. They did, and a year later I got a big thank you.

          – Pieter B
          Mar 13 at 17:26













        • @PieterB I have had a similar experience. "Cutting edge" tech is unfortunately a very large cargo cult in a lot of ways. Instead of choosing the best method for the job many companies will just follow whatever a FAANG is doing at the time without any question.

          – CL40
          Mar 13 at 17:33











        • And also: what comes out as a monolithic application once compiled may be a very modular application, code-wise.

          – Pieter B
          Mar 13 at 18:09



















        1














        For UI apps, splitting is unlikely to decrease the overall number of bugs, but it will shift the mix toward problems caused by communication.



        Speaking of user-facing UI applications and sites: users are extremely impatient and demand low response times, which turns any communication delay into a bug. As a result, you trade a potential decrease in bugs (from the reduced complexity of each single component) for very hard timing bugs in cross-process or cross-machine communication.



        If the units of data the program deals with are large (e.g. images), any cross-process hand-off is longer and harder to hide: something like "apply a transformation to a 10 MB image" instantly gains about 20 MB of disk/network IO, plus two conversions between the in-memory format and a serializable format and back. There is really not much you can do to hide the time needed for that from the user.
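A rough sketch of that cost, using Python's stdlib pickle as a stand-in for whatever wire format the two processes would actually agree on:

```python
# Rough sketch of why cross-process hand-off of large data hurts:
# the same bytes are converted and moved twice (written by the producer,
# read by the consumer), on top of the two format conversions themselves.
import pickle

def cross_process_io_bytes(payload):
    """Bytes that hit the pipe/disk for one producer->consumer hop."""
    blob = pickle.dumps(payload)     # conversion #1: to wire format
    restored = pickle.loads(blob)    # conversion #2: back to memory
    assert restored == payload
    return 2 * len(blob)             # written once, read once

image = bytes(10 * 1024 * 1024)      # stand-in for a 10 MB image
# roughly 20 MB of extra IO just to move it between processes:
extra_io = cross_process_io_bytes(image)
```

An in-process call would hand over a reference to the same memory and pay none of this.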



        Additionally, any communication, and especially disk IO, is subject to antivirus/firewall checks, which inevitably adds another layer of hard-to-reproduce bugs and even more delays.



        Splitting a monolithic "program" shines where communication delays are not critical or are already unavoidable:




        • parallelizable bulk processing of information, where you can trade small extra delays for a significant improvement in the individual steps (sometimes eliminating the need for custom components by using off-the-shelf ones). A small per-step footprint may let you use multiple cheaper machines instead of a single expensive one, for example.

        • splitting monolithic services into less coupled microservices - calling several services in parallel instead of one most likely will not add extra delays (and may even decrease overall time, if each individual service is faster and there are no dependencies between the calls)

        • moving out operations that users expect to take a long time - rendering a complicated 3D scene or movie, computing complex metrics about data, ...

        • all sorts of "auto-complete", "spell-check", and other optional aids can be, and often are, made external - the most obvious example is the browser's URL auto-suggestion, where your input is sent to an external service (a search engine) all the time.


        Note that this applies to desktop apps as well as web sites - the user-facing portion of the program tends to be "monolithic": all user-interaction code tied to a single piece of data usually runs in a single process (it is not unusual to split processes on a per-piece-of-data basis, such as per HTML page or per image, but that is orthogonal to this question). Even on the most basic site with user input, you'll see validation logic running on the client side, even though making it server-side-only would be more modular and reduce complexity and code duplication.










































          0















          Does [it] help prevent bugs?




          Prevent? Well, no, not really.





          • It helps detect bugs.

            Namely all the bugs you didn't even know you had, that you only discovered when you tried to split that whole mess into smaller parts. So, in a way, it prevented those bugs from making their appearance in production — but the bugs were already there.


          • It helps reduce the impact of bugs.

            Bugs in monolithic applications have the potential to bring down the whole system and keep the user from interacting with your application at all. If you split that application into components, most bugs will, by design, only affect one of the components.


          • It creates a scenario for new bugs.

            If you want to keep the user experience the same, you will need to include new logic for all those components to communicate (via REST services, via OS system calls, what have you) so they can interact seamlessly from the user's POV.

            As a simple example: your monolithic app let users create a model and animate it without leaving the app. You split the app in two components: modeling and animation. Now your users have to export the modeling app's model to a file, then find the file and then open it with the animation app... Let's face it, some users are not gonna like that, so you have to include new logic for the modeling app to export the file and automatically launch the animation app and make it open the file. And this new logic, as simple as it may be, can have a number of bugs regarding data serialization, file access and permissions, users changing the installation path of the apps, etc.
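That new hand-off logic might look something like this sketch (all names hypothetical; a real app would launch the second program with something like `subprocess.Popen`). Note how many of its failure modes (serialization, missing files, a moved install path) simply did not exist in the monolith:

```python
# Hypothetical sketch of the hand-off logic the split introduces.
# Every step is a fresh opportunity for bugs: serialization, file
# permissions, users changing the installation path of the apps, ...
import json
import os

def export_model(model, directory):
    """Serialize the model to a file the animation app can open."""
    path = os.path.join(directory, "model.json")
    with open(path, "w") as f:
        json.dump(model, f)          # serialization can fail on odd data
    return path

def hand_off_to_animator(model, launch, directory):
    """Export, then launch the (hypothetical) animation app on the file.

    `launch` is injected so the caller decides how the second app is
    started, and so a missing executable surfaces as an explicit error.
    """
    path = export_model(model, directory)
    if not os.path.exists(path):     # defensive: the bug class we traded for
        raise RuntimeError("export vanished before hand-off")
    return launch(path)
```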


          • It is the perfect excuse to apply much needed refactoring.

            When you decide to split a monolithic app into smaller components, you (hopefully) do so with a lot more knowledge and experience about the system than when it was first designed, and thanks to that you can apply a number of refactors to make the code cleaner, simpler, more efficient, more resilient, more secure. And this refactoring can, in a way, help prevent bugs. Of course, you could also apply the same refactoring to the monolithic app to prevent the same bugs, but you don't because it's so monolithic that you're afraid of touching something in the UI and breaking business logic ¯_(ツ)_/¯


          So I wouldn't say you're preventing bugs just by breaking a monolithic app into smaller components, but you're indeed making it easier to reach a point in which bugs can be more easily prevented.




































            10 Answers









            92














            Yes. Generally, two smaller, less complex applications are much easier to maintain than a single large one.



            However, you get a new type of bug when the applications must work together to achieve a goal. To cooperate they have to exchange messages, and this orchestration can go wrong in various ways, even though every individual application functions perfectly. Having a million tiny applications has its own special problems.
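A toy sketch of that new bug class (hypothetical services): each part passes its own tests, yet the exchange between them is broken:

```python
# Hypothetical sketch: both "applications" work perfectly in isolation,
# but the orchestration between them is a new place for bugs (timeouts,
# lost messages, version skew in the message format).
def service_a():
    return {"version": 2, "total": 10}   # A upgraded its message schema

def service_b(message):
    # B still expects the version-1 field name -- an orchestration bug
    # that no unit test of A or B alone would catch.
    return message.get("sum")            # returns None, silently

def orchestrate():
    reply = service_b(service_a())
    if reply is None:
        raise RuntimeError("message contract broken between A and B")
    return reply
```

In a single process, the same mismatch would typically be a compile-time or immediate runtime error rather than a silently wrong message.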



            A monolithic application is really the default option you end up with when you add more and more features to a single application. It's the easiest approach when you consider each feature on its own. It's only once it has grown large that you can look at the whole and say "you know what, this would work better if we separated out X and Y".



























            • 6





              Yes and there are also performance considerations e.g. the cost of passing around a pointer versus serializing data.

              – JimmyJames
              Mar 12 at 14:31






            • 63





              "Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except, when it is not. Depends heavily on where and how those two applications have to interface with each other.

              – Doc Brown
              Mar 12 at 16:29






            • 10





              "Generally 2 smaller less complex applications are much easier to maintain than a single large one.". I think I'll want some more explanation for that. Why exactly would the process of generating two instead of one executable from a code base magically make the code easier? What decides how easy code is to reason about, is how tightly coupled it is and similar things. But that's a logical separation and has nothing to do with the physical one.

              – Voo
              Mar 12 at 17:46








            • 11





              @Ew The physical separation does not force a logical separation, that's the problem. I can easily design a system where two separate applications are closely coupled. Sure there's some correlation involved here since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I can claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up-to-date with their tools will probably also worry about maintenance of code.

              – Voo
              Mar 12 at 18:33








            • 9





              I think the discussion here can be summarized with 2 statements: 1) Splitting an app itself does not make an app more maintainable - on the contrary, it provides another possible point of failure 2) Splitting an app forces you to think about where to split it, which provides an advantage compared to a monolith where that has never been done.

              – R. Schmitz
              Mar 13 at 11:22
















            edited Mar 15 at 9:01 by Peter Mortensen
            answered Mar 12 at 11:46 by Ewan








            48















            Does splitting a potentially monolithic application into several smaller ones help prevent bugs




            Things are seldom that simple in reality.



            Splitting up definitely does not help to prevent those bugs in the first place. It can, however, sometimes help to find bugs faster. An application which consists of small, isolated components allows more individual ("unit"-style) tests of those components, which can make it easier to spot the root cause of certain bugs and so fix them faster.



            However,




            • even an application which appears to be monolithic from the outside may consist of a lot of unit-testable components inside, so unit testing is not necessarily harder for a monolithic app


            • as Ewan already mentioned, the interaction of several components introduces additional risks and bugs, and debugging an application system with complex interprocess communication can be significantly harder than debugging a single-process application
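As a minimal illustration of the first bullet (hypothetical component and names): a function buried deep inside a monolith is just as unit-testable as one shipped as a separate service:

```python
# Minimal sketch: a component inside a "monolithic" app is still
# perfectly testable in isolation -- no process boundary required.
import unittest

def apply_discount(price, percent):
    """Business logic buried inside a monolith."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    def test_half_off(self):
        self.assertEqual(apply_discount(80.0, 50), 40.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)
```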



            This also depends a lot on how well a larger app can be split into components, how broad the interfaces between the components are, and how those interfaces are used.



            In short, this is often a trade-off, and nothing where a "yes" or "no" answer is correct in general.




            why do programs tend to be monolithic




            Do they? Look around you: there are gazillions of web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).




            would they not be easier to maintain




            "Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams: a better-distributed workload, specialized teams with a clearer focus, and so on.



























            • 3





              w.r.t. your last sentence, Conway's law says that system structure tends to mimic org. structure: devs/teams are more familiar with some parts than others, so whilst fixes/improvements should happen in the most relevant part, it may be easier for a dev to hack it into "their" parts rather than (a) learn how that other part works or (b) work with someone more familiar with that part. This is related to the "seams" @TKK mentions, and how difficult it can be to find and enforce "correct"/simple ones.

              – Warbo
              Mar 13 at 16:35
















            48















            Does splitting a potentially monolithic application into several smaller ones help prevent bugs




            Things are seldom that simple in reality.



            Splitting up does definitely not help to prevent those bugs in the first place. It can sometimes help to find bugs faster. An application which consists of small, isolated components may allow more individual (kind of "unit"-) tests for those components, which can make it sometimes easier to spot the root cause of certain bugs, and so allow it to fix them faster.



            However,




            • even an application which appears to be monolithic from the outside may consist of a lot unit-testable components inside, so unit testing is not necessarily harder for a monolithic app


            • as Ewan already mentioned, the interaction of several components introduce additional risks and bugs. And debugging an application system with complex interprocess communication can be significantly harder than debugging a single-process application



            This depends also a lot on how well a larger app can split up into components, and how broad the interfaces between the components are, and how those interfaces are used.



            In short, this is often a trade-off, and nothing where a "yes" or "no" answer is correct in general.




            why do programs tend to be monolithic




            Do they? Look around you, there are gazillions of Web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).




            would they not be easier to maintain




            "Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams, so better distributed workload, specialized teams with clearer focus, and on.






            share|improve this answer





















            • 3





              w.r.t. your last sentence, Conway's law says that system structure tends to mimic org. structure: devs/teams are more familiar with some parts than others, so whilst fixes/improvements should happen in the most relevant part, it may be easier for a dev to hack it into "their" parts rather than (a) learn how that other part works or (b) work with someone more familiar with that part. This is related to the "seams" @TKK mentions, and how difficult it can be to find and enforce "correct"/simple ones.

              – Warbo
              Mar 13 at 16:35














            48












            48








            48








            Does splitting a potentially monolithic application into several smaller ones help prevent bugs




            Things are seldom that simple in reality.



            Splitting up does definitely not help to prevent those bugs in the first place. It can sometimes help to find bugs faster. An application which consists of small, isolated components may allow more individual (kind of "unit"-) tests for those components, which can make it sometimes easier to spot the root cause of certain bugs, and so allow it to fix them faster.



            However,




            • even an application which appears to be monolithic from the outside may consist of a lot unit-testable components inside, so unit testing is not necessarily harder for a monolithic app


            • as Ewan already mentioned, the interaction of several components introduce additional risks and bugs. And debugging an application system with complex interprocess communication can be significantly harder than debugging a single-process application



            This depends also a lot on how well a larger app can split up into components, and how broad the interfaces between the components are, and how those interfaces are used.



            In short, this is often a trade-off, and nothing where a "yes" or "no" answer is correct in general.




            why do programs tend to be monolithic




            Do they? Look around you, there are gazillions of Web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).




            would they not be easier to maintain




            "Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams, so better distributed workload, specialized teams with clearer focus, and on.














            edited Mar 14 at 6:28

























            answered Mar 12 at 12:35









            Doc Brown









            • w.r.t. your last sentence, Conway's law says that system structure tends to mimic org structure: devs/teams are more familiar with some parts than others, so while fixes/improvements should happen in the most relevant part, it may be easier for a dev to hack them into "their" parts rather than (a) learn how that other part works or (b) work with someone more familiar with that part. This is related to the "seams" @TKK mentions, and how difficult it can be to find and enforce "correct"/simple ones.
              – Warbo
              Mar 13 at 16:35







































            I'll have to disagree with the majority on this one. Splitting up an application into two separate ones does not in itself make the code any easier to maintain or reason about.



            Separating code into two executables just changes the physical structure of the code, but that's not what is important. What decides how complex an application is, is how tightly coupled the different parts that make it up are. This is not a physical property, but a logical one.



            You can have a monolithic application that has a clear separation of different concerns and simple interfaces. You can have a microservice architecture that relies on implementation details of other microservices and is tightly coupled with all others.



            What is true is that the process of splitting up one large application into smaller ones is very helpful when trying to establish clear interfaces and requirements for each part. In DDD speak, that would be coming up with your bounded contexts. But whether you then create lots of tiny applications or one large one with the same logical structure is more of a technical decision.
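A hypothetical sketch of this distinction (all class and method names are invented): the degree of coupling is visible in the code's dependencies, regardless of whether the parts are deployed together or separately.

```python
# Tightly coupled: BillingTight reaches into Inventory's private state.
class Inventory:
    def __init__(self):
        self._stock = {"widget": 5}   # underscore marks an implementation detail

    def reserve(self, item):
        """Public, narrow interface."""
        self._stock[item] -= 1

class BillingTight:
    def charge(self, inventory, item):
        inventory._stock[item] -= 1   # depends on Inventory's internals
        return f"charged for {item}"

# Loosely coupled: BillingLoose only knows an explicit, narrow contract.
class BillingLoose:
    def __init__(self, reserve_item):
        self._reserve = reserve_item  # injected callable is the whole interface

    def charge(self, item):
        self._reserve(item)
        return f"charged for {item}"

# Either version can live in one monolith or be split into services later;
# only the loosely coupled one can be split without rewriting Billing.
inv = Inventory()
print(BillingLoose(inv.reserve).charge("widget"))  # charged for widget
```

The coupling here is a logical property of the source: splitting `BillingTight` into its own process would not remove its dependency on `Inventory`'s internals, it would only make that dependency harder to satisfy.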






























            • But what if one takes a desktop application with multiple editing modes and instead makes one desktop application for each mode, which the user opens individually rather than having the modes interface with each other? Would that not eliminate a nontrivial amount of code dedicated to producing the "feature" of "user can switch between editing modes"?

              – The Great Duck
              Mar 13 at 2:07








            • @TheGreatDuck That sounds like it would also eliminate a non-trivial number of users who don't like having to switch between different applications. ;) But yes, eliminating features will generally lead to simpler code. Eliminate spell-checking and you will remove the possibility of having spell-checking bugs. It's just rarely done, because the feature was added because someone wanted it.
              – Odalrick
              Mar 13 at 10:24








            • @TheGreatDuck Surely the design of the UX should come before any architectural decisions. There's no point having the best-designed architecture if nobody uses your program. First decide what you want to build, and based on that decide on the technical details. If two separate applications are preferred, go for it. You can still share a lot of code via shared libraries, though.
              – Voo
              Mar 14 at 7:29











            • Is it really true to say that the complexity of the system is due to the tight coupling of the parts? I would say that the total complexity increases when you partition your system, since you introduce indirection and communication, although the complexity of each individual component is isolated within more limited bounds.

              – Alex
              Mar 14 at 22:22











            • @Alex I think the confusion might stem from a misunderstanding of what coupling means in this context. The wiki article is not particularly great, but it gives some idea. Low coupling basically means having modules with clear responsibilities and boundaries. High coupling leads to code where, if you want to change one thing, you'll have to fix code in lots of other parts, including other modules.

              – Voo
              Mar 15 at 8:12
















            answered Mar 12 at 17:53









            Voo







































            Easier to maintain once you've finished splitting them, yes. But splitting them is not always easy. Trying to split off a piece of a program into a reusable library reveals where the original developers failed to think about where the seams should be. If one part of the application is reaching deep into another part of the application, it can be difficult to fix. Ripping the seams forces you to define the internal APIs more clearly, and this is what ultimately makes the code base easier to maintain. Reusability and maintainability are both products of well defined seams.
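A sketch of what "defining the internal API" might look like (names such as `UserStore` and `build_report` are hypothetical, not from the answer): once the seam is an explicit interface, the code on either side of it can be extracted into a reusable library without touching its consumers.

```python
from typing import Protocol

# The seam, made explicit as a small internal API. Before this existed,
# report code might have reached directly into the storage layer.
class UserStore(Protocol):
    def find_email(self, user_id: int) -> str: ...

def build_report(store: UserStore, user_id: int) -> str:
    # Depends only on the seam, so the storage side can be split off
    # into its own library or service without changing this function.
    return f"report for {store.find_email(user_id)}"

# Any implementation satisfying the seam works, including a test double.
class InMemoryStore:
    def __init__(self, data):
        self._data = data

    def find_email(self, user_id):
        return self._data[user_id]

print(build_report(InMemoryStore({1: "a@example.com"}), 1))  # report for a@example.com
```

Finding this seam in code that "reaches deep into another part" is exactly the hard step the answer describes; the interface above is the artifact that rip produces.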






























            • great post. i think a classic/canonical example of what you talk about is a GUI application. many times a GUI application is one program and the backend/frontend are tightly-coupled. as time goes by issues arise... like someone else needs to use the backend but can't because it is tied to the frontend. or the backend processing takes too long and bogs down the frontend. often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.

              – Trevor Boyd Smith
              Mar 12 at 17:32


























            answered Mar 12 at 15:37









            TKK







































            It's important to remember that correlation is not causation.



            Building a large monolith and then splitting it up into several small parts may or may not lead to a good design. (It can improve the design, but it isn't guaranteed to.)



            But a good design often leads to a system being built as several small parts rather than a large monolith. (A monolith can be the best design, it's just much less likely to be.)



            Why are small parts better? Because they're easier to reason about. And if it's easy to reason about correctness, you're more likely to get a correct result.
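As a toy illustration of "easier to reason about" (the function names here are invented): each small piece can be verified at a glance, and so can their composition, whereas the same logic written as one opaque step hides its individual claims.

```python
def normalize(s):
    """Each piece states one obviously checkable claim..."""
    return s.strip().lower()

def is_valid(s):
    return s.isalnum()

def clean_usernames(names):
    """...and the composition is just as easy to reason about."""
    return [normalize(n) for n in names if is_valid(normalize(n))]

print(clean_usernames(["  Alice ", "b0b", "!!"]))  # ['alice', 'b0b']
```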



            To quote C.A.R. Hoare:




            There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.




            If that's the case, why would anyone build an unnecessarily complicated or monolithic solution? Hoare provides the answer in the very next sentence:




            The first method is far more difficult.




            And later in the same source (the 1980 Turing Award Lecture):




            The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.







































                And later in the same source (the 1980 Turing Award Lecture):




                The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.







                share|improve this answer













                It's important to remember that correlation is not causation.



                Building a large monolith and then splitting it up into several small parts may or may not lead to a good design. (It can improve the design, but it isn't guaranteed to.)



                But a good design often leads to a system being built as several small parts rather than a large monolith. (A monolith can be the best design, it's just much less likely to be.)



                Why are small parts better? Because they're easier to reason about. And if it's easy to reason about correctness, you're more likely to get a correct result.



                To quote C.A.R. Hoare:




                There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.




                If that's the case, why would anyone build an unnecessarily complicated or monolithic solution? Hoare provides the answer in the very next sentence:




                The first method is far more difficult.




                And later in the same source (the 1980 Turing Award Lecture):




                The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.








                share|improve this answer












                share|improve this answer



                share|improve this answer










            answered Mar 12 at 18:27 by Daniel Pryden

                    This is not a question with a yes or no answer. The question is not just one of ease of maintenance; it is also a question of the efficient use of skills.



                    Generally, a well-written monolithic application is efficient. Inter-process and inter-device communication is not cheap. Breaking up a single process decreases efficiency. However, executing everything on a single processor can overload the processor and slow performance. This is the basic scalability issue. When the network enters the picture, the problem gets more complicated.



                    A well-written monolithic application that can operate efficiently as a single process on a single server can be easy to maintain and keep free of defects, but still not be an efficient use of coding and architectural skills. The first step is to break the process into libraries that still execute as the same process, but are coded independently, following disciplines of cohesion and loose coupling. A good job at this level improves maintainability and seldom affects performance.
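A minimal sketch of that first step (all class and function names are invented for illustration): the application depends only on a narrow, cohesive interface, while the library behind it can be developed and tested independently.

```python
# Hypothetical example of carving a library out of a monolith: the caller
# is coupled only to a small protocol, not to the library's internals.
from typing import Protocol

class Renderer(Protocol):
    def render(self, mesh_id: str) -> bytes: ...

class SoftwareRenderer:
    """One interchangeable implementation, maintainable in isolation."""
    def render(self, mesh_id: str) -> bytes:
        return f"rendered:{mesh_id}".encode()

def export_frame(renderer: Renderer, mesh_id: str) -> bytes:
    # Application code sees only the Renderer interface (loose coupling).
    return renderer.render(mesh_id)

assert export_frame(SoftwareRenderer(), "cube") == b"rendered:cube"
```

Because everything still runs in one process, this split costs essentially nothing at runtime, which is why it seldom affects performance.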



                    The next stage is to divide the monolith into separate processes. This is harder because you enter into tricky territory. It's easy to introduce race condition errors. The communication overhead increases and you must be careful of "chatty interfaces." The rewards are great because you break a scalability barrier, but the potential for defects also increases. Multi-process applications are easier to maintain on the module level, but the overall system is more complicated and harder to troubleshoot. Fixes can be devilishly complicated.



                    When the processes are distributed to separate servers or to a cloud style implementation, the problems get harder and the rewards greater. Scalability soars. (If you are considering a cloud implementation that does not yield scalability, think hard.) But the problems that enter at this stage can be incredibly difficult to identify and think through.
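The "chatty interface" hazard mentioned above can be sketched in a few lines (the functions are stand-ins, not a real RPC library): once a call crosses a process or network boundary, each round trip carries latency, so N single-item calls should collapse into one batched call.

```python
# Illustrative only: fetch_one/fetch_many stand in for cross-process calls.

def fetch_one(record_id: int) -> dict:
    # Imagine each invocation paying one network round trip.
    return {"id": record_id}

def fetch_many(record_ids: list) -> list:
    # One round trip for the whole batch instead of one per record.
    return [{"id": r} for r in record_ids]

ids = [1, 2, 3]
chatty = [fetch_one(i) for i in ids]   # len(ids) round trips
batched = fetch_many(ids)              # one round trip
assert chatty == batched               # same result, very different cost profile
```

In a single process the two styles are indistinguishable; it is exactly the split into separate processes that turns the chatty version into a performance bug.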






                        answered Mar 13 at 1:49 by MarvW

                            No, it does not make it easier to maintain. If anything, welcome to more problems.

                            Why?

                            • The programs are not orthogonal; they need to preserve each other's work insofar as is reasonable, which implies a common understanding.
                            • Much of the code in both programs is identical. Are you maintaining a common shared library, or maintaining two separate copies?
                            • You now have two development teams. How are they communicating?
                            • You now have two products that need:

                              • a common UI style, interaction mechanisms, etc... So you now have design problems. (How are the dev teams communicating again?)
                              • backward compatibility (can modeller v1 be imported into animator v3?)
                              • cloud/network integration (if it's a feature) now has to be updated across twice as many products.

                            • You now have three consumer markets: Modellers, Animators, and Modeller Animators.

                              • They will have conflicting priorities.
                              • They will have conflicting support needs.
                              • They will have conflicting usage styles.

                            • Do the Modeller Animators have to open two separate applications to work on the same file? Is there a third application with both functions? Does one application load the functions of the other?
                            • etc...

                            That being said, smaller code bases are easier to maintain at the application level; you're just not going to get a free lunch. This is the same problem at the heart of micro-service (or any modular) architecture. It's not a panacea: maintenance difficulty at the application level is traded for maintenance difficulty at the orchestration level. Those issues are still issues; they just aren't in the code base any more, and they will need to be either avoided or solved.

                            If solving the problem at the orchestration level is simpler than solving it at each application level, then it makes sense to split it into two code bases and deal with the orchestration issues.

                            Otherwise, no, just do not do it; you would be better served by improving the internal modularity of the application itself. Push out sections of code into cohesive, easier-to-maintain libraries that the application acts as a plugin to. After all, a monolith is just the orchestration layer of a library landscape.
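That closing idea, the monolith as nothing but an orchestration layer over libraries, can be sketched like this (module and function names are invented; in practice each step would live in its own library):

```python
# Illustrative sketch: features live in cohesive modules, and the
# "monolith" merely wires them together in order.

def model_step(scene: dict) -> dict:       # would live in a modelling library
    return {**scene, "modelled": True}

def animate_step(scene: dict) -> dict:     # would live in an animation library
    return {**scene, "animated": True}

def run_pipeline(scene: dict) -> dict:
    """The application shell: pure orchestration, no feature logic of its own."""
    for step in (model_step, animate_step):
        scene = step(scene)
    return scene

assert run_pipeline({}) == {"modelled": True, "animated": True}
```

Splitting into separate processes would move `run_pipeline`'s job out of the code base and into deployment-level orchestration, which is exactly the trade described above.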






                                answered Mar 13 at 3:17 by Kain0_0

                                    There were a lot of good answers, but since there is almost a dead split I'll throw my hat into the ring too.

                                    In my experience as a software engineer, I have found this is not a simple problem. It really depends on the size, scale, and purpose of the application. Older applications, by virtue of the inertia required to change them, are generally monolithic, as this was common practice for a long time (Maya would qualify in this category). I assume you're talking about newer applications in general.

                                    In small enough applications that are more or less single-concern, the overhead required to maintain many separate parts generally exceeds the utility of the separation. If it can be maintained by one person, it can probably be made monolithic without causing too many problems. The exception to this rule is when you have many different parts (a frontend, a backend, perhaps some data layers in between) that are conveniently (logically) separated.

                                    In very large applications, even single-concern ones, splitting things up makes sense in my experience. You get the benefit of reducing a subset of the possible classes of bugs in exchange for other (sometimes easier-to-solve) bugs. In general, you can also have teams of people working in isolation, which improves productivity. Many applications these days, however, are split pretty finely, sometimes to their own detriment. I have also been on teams where the application was split across so many microservices unnecessarily that it introduced a lot of overhead when things stopped talking to each other. Additionally, having to hold all of the knowledge of how each part talks to the other parts gets much harder with each successive split. There is a balance, and as you can tell by the answers here, the way to do it isn't very clear; there is really no standard in place.
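The "overhead when things stop talking to each other" is concrete: every cross-service call now needs a timeout and a fallback, logic an in-process function call never required. A hedged sketch (the service and its failure flag are made up for illustration):

```python
# Illustrative only: call_recommendations stands in for an RPC to another
# service; healthy=False simulates that service being unreachable.

def call_recommendations(user_id: int, *, healthy: bool = True) -> list:
    if not healthy:
        raise TimeoutError("recommendations service unreachable")
    return [f"item-{user_id}"]

def recommendations_with_fallback(user_id: int, *, healthy: bool = True) -> list:
    try:
        return call_recommendations(user_id, healthy=healthy)
    except TimeoutError:
        return []  # degrade gracefully rather than fail the whole request

assert recommendations_with_fallback(7) == ["item-7"]
assert recommendations_with_fallback(7, healthy=False) == []
```

Multiply that pattern by every edge between services and the coordination knowledge the answer describes grows with each successive split.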






                                    answered by CL40 (new contributor)

                                    • My first job as a programmer was as a millennium-bug programmer. The software I was working on was split into hundreds of little programs which each did a little part, strung together with batch files and using files to communicate state. It was a big mess, invented in a time when computers were slow, had little memory, and storage was expensive. When I worked with it, the code was already 10-15 years old. Once we were done they asked my advice, and my advice was to convert everything to a new monolithic application. They did, and a year later I got a big thank you.

                                      – Pieter B
                                      Mar 13 at 17:26













                                    • @PieterB I have had a similar experience. "Cutting edge" tech is unfortunately a very large cargo cult in a lot of ways. Instead of choosing the best method for the job many companies will just follow whatever a FAANG is doing at the time without any question.

                                      – CL40
                                      Mar 13 at 17:33











                                    • And also: what comes out as a monolithic application once compiled may be a very modular application, code-wise.

                                      – Pieter B
                                      Mar 13 at 18:09
















                                    3














                                    There were a lot of good answers but since there is almost a dead split I'll throw my hat into the ring too.



                                    In my experience as a software engineer, I have found this to not be a simple problem. It really depends on the size, scale, and purpose of the application. Older applications by virtue of the inertia required to change them, are generally monolithic as this was a common practice for a long time (Maya would qualify in this category). I assume you're talking about newer applications in general.



                                    In small enough applications that are more-or-less single concern the overhead required to maintain many separate parts generally exceeds the utility of having the separation. If it can be maintained by one person, it can probably be made monolithic without causing too many problems. The exception to this rule is when you have many different parts (a frontend, backend, perhaps some data layers in between) that are conveniently separated (logically).



                                    There are already a lot of good answers, but since opinion is almost evenly split I'll throw my hat into the ring too.

                                    In my experience as a software engineer, this is not a simple problem: it really depends on the size, scale, and purpose of the application. Older applications, by virtue of the inertia required to change them, are generally monolithic, as that was common practice for a long time (Maya would qualify in this category). I assume you're talking about newer applications in general.

                                    In small enough applications that are more or less single-concern, the overhead required to maintain many separate parts generally exceeds the utility of the separation. If it can be maintained by one person, it can probably be made monolithic without causing too many problems. The exception to this rule is when you have many different parts (a frontend, a backend, perhaps some data layers in between) that separate conveniently along logical lines.

                                    In very large applications, even single-concern ones, splitting makes sense in my experience. You reduce one subset of the possible class of bugs in exchange for other (sometimes easier to solve) bugs. In general, you can also have teams of people working in isolation, which improves productivity. Many applications these days, however, are split very finely, sometimes to their own detriment. I have also been on teams where the application was split across so many unnecessary microservices that it introduced a lot of overhead whenever things stopped talking to each other. Additionally, holding all of the knowledge of how each part talks to the other parts gets much harder with each successive split. There is a balance, and as you can tell from the answers here, the right way to do it isn't very clear; there is really no standard in place.






                                    answered Mar 13 at 17:16 by CL40

                                    • 2





                                      My first job as a programmer was as a millennium-bug programmer. The software I was working on was split into hundreds of little programs, each doing a little part, strung together with batch files and using files to communicate state. It was a big mess, invented in a time when computers were slow, had little memory, and storage was expensive. By the time I worked with it, the code was already 10-15 years old. Once we were done they asked my advice, and my advice was to convert everything into a new monolithic application. They did, and a year later I got a big thank you.

                                      – Pieter B
                                      Mar 13 at 17:26













                                    • @PieterB I have had a similar experience. "Cutting edge" tech is unfortunately a very large cargo cult in a lot of ways. Instead of choosing the best method for the job, many companies will just follow whatever a FAANG is doing at the time, without question.

                                      – CL40
                                      Mar 13 at 17:33











                                    • And also: what comes out as a monolithic application once compiled may be a very modular application, code-wise.

                                      – Pieter B
                                      Mar 13 at 18:09














                                    For UI apps, it is unlikely to decrease the overall number of bugs, but it will shift the bug mix toward problems caused by communication.



                                    Speaking of user-facing UI applications/sites: users are extremely impatient and demand low response times, which turns any communication delay into a bug. As a result, you trade a potential decrease in bugs (from the decreased complexity of each single component) for very hard bugs and the timing requirements of cross-process/cross-machine communication.



                                    If the units of data the program deals with are large (e.g. images), then any cross-process delays will be longer and harder to eliminate: something like "apply transformation to 10 MB image" instantly gains +20 MB of disk/network IO, in addition to two conversions (from the in-memory format to a serializable format, and back). There is really not much you can do to hide the time needed for that from the user.
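
                                    As a rough illustration of that overhead (a sketch only: pickle stands in for whatever serialization format is used, and an in-memory buffer stands in for the disk/network hop):

```python
import io
import pickle
import time

# A 10 MB in-memory "image" that has to cross a process boundary.
image = bytes(10 * 1024 * 1024)

start = time.perf_counter()
buf = io.BytesIO()
pickle.dump(image, buf)                  # in-memory format -> serializable format
restored = pickle.loads(buf.getvalue())  # and back on the receiving side
elapsed = time.perf_counter() - start

# The payload is written once and read once: roughly 2x the image size in IO
# for a single hop, before any network latency or antivirus scanning on top.
print(restored == image)
print(len(buf.getvalue()) >= len(image))
```

                                    In a real split the two halves run in different processes, so the 20 MB also crosses a pipe, socket, or file rather than a buffer, making the cost strictly higher than this sketch.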



                                    Additionally, any communication, and especially disk IO, is subject to antivirus/firewall checks; this inevitably adds another layer of hard-to-reproduce bugs and even more delays.



                                    Splitting a monolithic "program" shines where communication delays are not critical or are already unavoidable:




                                    • parallelizable bulk processing of information, where you can trade small extra delays for a significant improvement in individual steps (sometimes eliminating the need for custom components by using off-the-shelf ones). A small footprint per step may, for example, let you use multiple cheaper machines instead of a single expensive one.

                                    • splitting monolithic services into less-coupled micro-services - calling several services in parallel instead of one will most likely not add extra delays (it may even decrease overall time, if each individual service is faster and there are no dependencies between them)

                                    • moving out operations that users expect to take a long time - rendering a complicated 3d scene/movie, computing complex metrics about data, ...

                                    • all sorts of "auto-complete", "spell-check", and other optional aids can be, and often are, made external - the most obvious example is the browser's URL auto-suggestion, where your input is sent to an external service (a search engine) all the time.


                                    Note that this applies to desktop apps as well as web sites: the user-facing portion of the program tends to be "monolithic", with all user-interaction code tied to a single piece of data usually running in a single process (it is not unusual to split processes on a per-piece-of-data basis, like per HTML page or per image, but that is orthogonal to this question). Even for the most basic site with user input, you'll see validation logic running on the client side, even though keeping it server-side only would be more modular and reduce complexity/code duplication.
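
                                    The parallel-calls point in the list above can be sketched like this (service names are hypothetical, and `time.sleep` stands in for network latency):

```python
import concurrent.futures
import time

# Hypothetical stand-in for a network call to one microservice.
def call_service(name, latency):
    time.sleep(latency)
    return f"{name}: ok"

services = [("auth", 0.05), ("catalog", 0.05), ("pricing", 0.05)]

# Sequential: total time is roughly the SUM of the latencies.
start = time.perf_counter()
sequential = [call_service(n, l) for n, l in services]
seq_time = time.perf_counter() - start

# Parallel fan-out: total time is roughly the MAX of the latencies.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    parallel = list(pool.map(lambda s: call_service(*s), services))
par_time = time.perf_counter() - start

print(sequential == parallel)  # same results either way
print(par_time < seq_time)     # fan-out pays ~max, not ~sum
```

                                    This only holds when the calls are independent; once one service's output feeds another's input, the calls serialize again and the split buys you nothing on latency.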






                                        answered Mar 15 at 6:58 by Alexei Levenkov

                                            Does [it] help prevent bugs?




                                            Prevent? Well, no, not really.





                                            • It helps detect bugs.

                                              Namely, all the bugs you didn't even know you had, which you only discover when you try to split that whole mess into smaller parts. So, in a way, it prevents those bugs from making their appearance in production, but the bugs were already there.


                                            • It helps reduce the impact of bugs.

                                              Bugs in monolithic applications have the potential to bring down the whole system and keep the user from interacting with your application at all. If you split that application into components, most bugs will, by design, only affect one of the components.


                                            • It creates a scenario for new bugs.

                                              If you want to keep the user experience the same, you will need to include new logic for all those components to communicate (via REST services, via OS system calls, what have you) so they can interact seamlessly from the user's POV.

                                              As a simple example: your monolithic app lets users create a model and animate it without leaving the app. You split the app into two components: modeling and animation. Now your users have to export the model from the modeling app to a file, then find the file, then open it with the animation app... Let's face it, some users are not gonna like that, so you include new logic for the modeling app to export the file, automatically launch the animation app, and make it open the file. And this new logic, as simple as it may be, can have a number of bugs regarding data serialization, file access and permissions, users changing the installation path of the apps, etc.


                                            • It is the perfect excuse to apply much needed refactoring.

                                              When you decide to split a monolithic app into smaller components, you (hopefully) do so with a lot more knowledge and experience about the system than when it was first designed, and thanks to that you can apply a number of refactors to make the code cleaner, simpler, more efficient, more resilient, and more secure. This refactoring can, in a way, help prevent bugs. Of course, you could apply the same refactoring to the monolithic app to prevent the same bugs, but you don't, because it's so monolithic that you're afraid of touching something in the UI and breaking the business logic ¯\_(ツ)_/¯


                                            So I wouldn't say you prevent bugs just by breaking a monolithic app into smaller components, but you do make it easier to reach a point where bugs can be more easily prevented.
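
                                            A minimal sketch of the kind of glue logic the modeling/animation example describes (every name here is hypothetical, and `animation-app` stands in for the animation app's executable):

```python
import json
import subprocess
import tempfile
from pathlib import Path

# Hypothetical hand-off: the modeling app exports the model to a file and
# launches the animation app on it, hiding the file juggling from the user.
# Each step is a brand-new place for bugs that the monolith never had:
# serialization, file permissions, and the other app's install path.
def hand_off_model(model: dict, animation_app: str = "animation-app") -> Path:
    export = Path(tempfile.mkdtemp()) / "model.json"
    export.write_text(json.dumps(model))  # data serialization can fail here
    try:
        subprocess.run([animation_app, str(export)], check=True)
    except FileNotFoundError:
        # The animation app moved or was never installed: a failure mode
        # the single-app version simply could not have.
        pass
    return export
```

                                            Nothing in this sketch is business logic; it exists purely because the split happened, which is exactly why it "creates a scenario for new bugs."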






                                            share|improve this answer




























                                              0















                                              Does [it] help prevent bugs?




                                              Prevent? Well, no, not really.





                                              • It helps detect bugs.

                                                Namely all the bugs you didn't even know you had, that you only discovered when you tried to split that whole mess into smaller parts. So, in a way, it prevented those bugs from making their appearance in production — but the bugs were already there.


                                              • It helps reduce the impact of bugs.

                                                Bugs in monolithic applications have the potential to bring down the whole system and keep the user from interacting with your application at all. If you split that application into components, most bugs will —by design— only affect one of the components.


                                              • It creates a scenario for new bugs.

  If you want to keep the user experience the same, you will need new logic for all those components to communicate (via REST services, OS system calls, what have you) so they can interact seamlessly from the user's POV.

  As a simple example: your monolithic app lets users create a model and animate it without leaving the app. You split the app into two components: modeling and animation. Now your users have to export the model from the modeling app to a file, find that file, and open it with the animation app... Let's face it, some users are not gonna like that, so you add new logic for the modeling app to export the file, automatically launch the animation app, and make it open the file. And this new logic, as simple as it may be, can have a number of bugs: data serialization, file access and permissions, users changing the installation path of the apps, and so on.


• It is the perfect excuse to apply much-needed refactoring.

  When you decide to split a monolithic app into smaller components, you (hopefully) do so with a lot more knowledge and experience about the system than when it was first designed, and thanks to that you can apply a number of refactorings to make the code cleaner, simpler, more efficient, more resilient and more secure. This refactoring can, in a way, help prevent bugs. Of course, you could also apply the same refactoring to the monolithic app to prevent the same bugs; but you don't, because the app is so monolithic that you're afraid of touching something in the UI and breaking the business logic ¯\_(ツ)_/¯


So I wouldn't say you're preventing bugs just by breaking a monolithic app into smaller components, but you are making it easier to reach a point where bugs can be more easily prevented.
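To make the "new bugs" point concrete, here is a minimal sketch (in Python, with a hypothetical `animator` executable and export path — none of this comes from any real app) of the hand-off logic the split introduces. Every step is a failure mode the monolithic app simply never had:

```python
import json
import shutil
import subprocess
import tempfile
from pathlib import Path


def hand_off_to_animation(model: dict, animation_exe: str = "animator") -> None:
    """Export a model to a file and open it in a separate animation app.

    Each step below is a brand-new class of bug introduced by the split:
    serialization, file access/permissions, and locating the other app.
    """
    # 1. Serialization can fail (non-serializable fields, disk full, ...).
    export_path = Path(tempfile.gettempdir()) / "model_export.json"
    try:
        export_path.write_text(json.dumps(model))
    except (TypeError, OSError) as err:
        raise RuntimeError(f"export failed: {err}") from err

    # 2. The other app may have been moved, renamed, or uninstalled.
    exe = shutil.which(animation_exe)
    if exe is None:
        raise RuntimeError(f"{animation_exe!r} not found on PATH")

    # 3. Launching the other app and passing the file is yet another seam.
    subprocess.Popen([exe, str(export_path)])
```

None of this glue code exists while modeling and animation live in one process; it only appears, along with its bugs, once you split them.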
















                                                answered Mar 13 at 17:45









                                                walen















