
Stubborn programmers

Malogeek

Golden Member
I am very stubborn, I admit it. I just spent the entire morning arguing with a guy in the database division that their data output structure was invalid, so that I wouldn't have to add 2 lines of code to compensate for it. The 2 lines of code aren't the issue, of course; it's the fact that they refuse to acknowledge the problem and insist it's my job to write my logic differently to compensate. I despise having to put in IF statements for an issue that wouldn't exist in the first place if things were done right.

Venting....
 
You did the right thing. Their output may be consumed by systems other than yours; it needs to be fixed on their end.

But maybe next time... offer to help them fix it on their end?
 
But maybe next time... offer to help them fix it on their end?
I did offer a couple of solutions: one a quick-and-dirty fix that wouldn't impact anyone, and another a more extensive fix for the long term. Both were ignored. I work with databases on a daily basis for my projects, so it's not like I'm telling a mechanic how to fix my car. He obviously did not see it from my coding perspective, though.

And yes the same output is used for other divisions and I'm contacting them to find out how they deal with this instance.
 
If you are regularly having to do any kind of data manipulation after pulling data from the backend, then the backend code should be changed to return the data in the way the front end needs it. There's really no arguing the other way.
 
If you are regularly having to do any kind of data manipulation after pulling data from the backend, then the backend code should be changed to return the data in the way the front end needs it. There's really no arguing the other way.

In the general sense, I actually disagree. Let me play devil's advocate a little.

Data should be represented in such a way that it makes sense in its particular domain. Data leaving the system to be consumed by another system should not be transformed. It should be the consumer's responsibility to transform the data. This is primarily because consumers generally operate in a different domain; domain boundaries should remain as isolated as possible.

Another way to say what I just said is "code to an interface," which everyone says.

Here's what I mean.

Suppose I have a Customer service that does CRUD for customers' demographic info (name, address, phone number, etc.). Then suppose I also have an Inventory service that knows everything about every item in my store. When I do a "Get Customer" against the Customer service, I get all sorts of demographic info for a customer; same idea for "Get Item" in the Inventory service.

So let's say I have an Order History service. Presumably, it would need to know who bought what, right? So internally, its domain would consist of a Customer ID, a timestamp, and a list of Item IDs. This all makes perfect sense for this domain.

But a bunch of IDs are pretty useless to the user, right? Okay, so you need a transform. If you're trying to present an order history to a user, you'd want to consume all 3 services, join all the data together, project it to get rid of fields you don't care about, and transform it to get the time stamps to be in the user's locale.

Each service should make no assumption as to how it will be used. So, the only thing it can do is make sure that a domain/interface makes sense on its own.

Hope that makes sense!
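To make that concrete, here's a minimal sketch of the consumer-side join/project/transform I'm describing. All the service calls and data are made-up stubs; in a real system these would be network calls to the three services:

```javascript
// Hypothetical stand-ins for the three services described above.
// In a real system these would be network calls, not local objects.
const customerService = {
  getCustomer: (id) => ({ id, name: "Ada Lovelace", address: "...", phone: "..." }),
};
const inventoryService = {
  getItem: (id) => ({ id, title: `Item #${id}`, sku: "...", warehouse: "..." }),
};
const orderHistoryService = {
  getOrders: (customerId) => [
    { customerId, timestamp: "2017-06-01T14:30:00Z", itemIds: [7, 42] },
  ],
};

// Consumer-side transform: join the three domains, project away the
// fields this view doesn't care about, and localize the timestamp.
function buildOrderHistoryView(customerId, locale) {
  const customer = customerService.getCustomer(customerId);
  return orderHistoryService.getOrders(customerId).map((order) => ({
    customerName: customer.name,
    placedAt: new Date(order.timestamp).toLocaleString(locale),
    items: order.itemIds.map((id) => inventoryService.getItem(id).title),
  }));
}
```

Note that none of the three services had to know this view exists; the join lives entirely in the consumer.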
 
In the general sense, I actually disagree. Let me play devil's advocate a little.

Data should be represented in such a way that it makes sense in its particular domain. Data leaving the system to be consumed by another system should not be transformed. It should be the consumer's responsibility to transform the data. This is primarily because consumers generally operate in a different domain; domain boundaries should remain as isolated as possible.

Another way to say what I just said is "code to an interface," which everyone says.

Here's what I mean.

Suppose I have a Customer service that does CRUD for customers' demographic info (name, address, phone number, etc.). Then suppose I also have an Inventory service that knows everything about every item in my store. When I do a "Get Customer" against the Customer service, I get all sorts of demographic info for a customer; same idea for "Get Item" in the Inventory service.

So let's say I have an Order History service. Presumably, it would need to know who bought what, right? So internally, its domain would consist of a Customer ID, a timestamp, and a list of Item IDs. This all makes perfect sense for this domain.

But a bunch of IDs are pretty useless to the user, right? Okay, so you need a transform. If you're trying to present an order history to a user, you'd want to consume all 3 services, join all the data together, project it to get rid of fields you don't care about, and transform it to get the time stamps to be in the user's locale.

Each service should make no assumption as to how it will be used. So, the only thing it can do is make sure that a domain/interface makes sense on its own.

Hope that makes sense!

I still disagree with what you're saying. In your example, the Order History service should give you the data back with the user's name, and the list of history items should include the product information. All of that should be done server side. Then loop through the list of history items and display them how you need. With modern JS libraries out there, there is no need to "transform" the time to the current user's locale; just use moment.js and pass in the timestamp (or whatever date format you are using) and it will do that for you.

I'm also not sure if you are implying that you should get the history for a user, then make another call from the client to get the user's name based on the userId in one of the history items, and then go get the product information for each history entry. I hope that's not what you are saying should be done, because that would be a terrible idea. If you want the history for a user, it should be just 1 call to the backend, returning a JSON object with all of the data you need.

I'm not sure if you've ever used GraphQL, but if you have it set up, you can pass parameters when hitting the backend so that the data comes back in the format/order/sort you are expecting. Again, if you are having to do data manipulation on the client, you are doing it wrong. The less data manipulation on the client, the better.

And by that, I mean if you have control over the backend services. I'm not talking about hitting a 3rd-party API where you have no control, because of course in that instance you could have to do data manipulation, since you have no control over how the data comes back. But I thought it was obvious that we weren't talking about that case; if not, my bad.
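For illustration, a parameterized GraphQL request along those lines might look like this. The endpoint, field names, and query shape are all made up for the sketch, and built-in fetch (Node 18+ or a browser) is assumed:

```javascript
// Hypothetical GraphQL query: the client asks for exactly the fields
// it needs, already joined server-side. Field names are made up.
const ORDER_HISTORY_QUERY = `
  query OrderHistory($customerId: ID!) {
    customer(id: $customerId) {
      name
      orders {
        placedAt
        items { title }
      }
    }
  }
`;

// One round trip: the server resolves customer, orders, and items and
// returns a single JSON object shaped exactly like the query.
async function fetchOrderHistory(endpoint, customerId) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: ORDER_HISTORY_QUERY, variables: { customerId } }),
  });
  return (await res.json()).data;
}
```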
 
So GraphQL literally does exactly what I'm saying to do. GraphQL does the manipulation external to each dependent service's domain.

This isolation is important. It allows you to maintain and deploy each service independently, which means each domain of the system is decoupled as much as possible from the others.

As weird as it may sound, yes, I'm implying that you should need to make multiple calls to get your data ready to use. GraphQL is one particular way of doing it.

I think you took my example too literally... a date transform is trivial; I was just trying to give a simple, easy to follow example of a transform.

Take a look at some microservice architecture literature. This is an oldie, but goodie: https://martinfowler.com/articles/microservices.html
 
So GraphQL literally does exactly what I'm saying to do. GraphQL does the manipulation external to each dependent service's domain.

This isolation is important. It allows you to maintain and deploy each service independently, which means each domain of the system is decoupled as much as possible from the others.

As weird as it may sound, yes, I'm implying that you should need to make multiple calls to get your data ready to use. GraphQL is one particular way of doing it.

I think you took my example too literally... a date transform is trivial; I was just trying to give a simple, easy to follow example of a transform.

Take a look at some microservice architecture literature. This is an oldie, but goodie: https://martinfowler.com/articles/microservices.html
The bolded part (that you should make multiple calls to get your data ready to use) is a terrible idea, especially if you don't have to do that and have control over how the data comes from the backend. Making 10 calls to the backend when you could make just 1 is a huge waste of resources and time, and terrible practice. Seriously, if new people are reading this thread, just forget you ever read it.

And if you're using graphQL to do it this way, you're doing it wrong x2. The whole point of graphQL is to get just the data you need back in the way you need it.

GraphQL does all of the data manipulation on the server before you get it back, not on the client, so I am not sure how you can say it does exactly what you're saying. You have been saying to do data manipulation on the client the whole time.
 
No, all I've said is that each service should not manipulate data, that it should provide an interface that makes sense within its domain. This is because each service should make no assumptions as to how it will be used by other services. Consumers should be responsible for combining each domain into something usable for what they're doing. I still haven't said anything about a client.

GraphQL is one way of combining domains. Data sources in GraphQL are implemented as "resolvers" in their lingo. They can be databases or flat files, but commonly in 2017, they are other services (see my previous post with a link to an explanation of microservice architecture). So basically, GraphQL makes calls to your N internal services on your behalf, puts the data together however you set it up in your schema, and returns it. Yet again, I've said nothing about a client.
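As a rough sketch of what resolvers do conceptually (this is not GraphQL's actual resolver signature, and the service clients here are made-up synchronous stubs standing in for calls to other microservices):

```javascript
// Made-up service clients; real ones would be async network calls
// to the Customer, Order History, and Inventory services.
const customerService = { get: (id) => ({ id, name: "Grace Hopper" }) };
const orderService = { forCustomer: (id) => [{ customerId: id, itemIds: [7, 42] }] };
const inventoryService = { get: (id) => ({ id, title: `Item #${id}` }) };

// Each resolver translates one field of the combined schema into a
// call against the service that owns that domain. The GraphQL layer
// itself is just a transform over the N underlying domains.
const resolvers = {
  customer: (args) => customerService.get(args.id),
  orders: (customer) => orderService.forCustomer(customer.id),
  items: (order) => order.itemIds.map((id) => inventoryService.get(id)),
};
```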

Now if you think about this, GraphQL's sole purpose is to translate a set of domains into another. So it, in itself, is a microservice with its own domain, which happens to be a transform. Nothing about a client yet.

So let's discuss - what, exactly, is a client? Most people only consider a phone app or web browser as a client. This isn't strictly true and if this is the only way you perceive it, you will end up with anti-patterns. Anything that uses something else is a client. So if GraphQL uses your Customer service in a resolve function to get data, then it is a client. Now, for the sake of clarity, we typically say "consumer" for the generalized terminology, such that "client" can continue to be used for the vernacular web browser.

Now let's talk about the system and data manipulation from the client side, the web browser side. You could make a single call to your GraphQL instance if you'd like. Your other option is to make N calls to your N services. Before you say that's bad, be open-minded and keep reading. Regardless of which option you choose, you're still doing the exact same thing: N calls and a transform from the client side. It just so happens that with the GraphQL option, N is 1 and the transform is a no-op.

So why would you want to call each individual service from the client side and make multiple calls? Depending on your application, it may be a positive tradeoff for scaling reasons. Doing it completely from the client side increases network overhead to the client (and possibly CPU/memory overhead, too, if your transform is heavy), but it allows you to employ fault-tolerant practices, such as the circuit-breaker pattern, graceful degradation, and so on. In applications where partial data may suffice for limited functionality, this architecture would be more reliable than GraphQL, since it eliminates a single point of failure. In practice, making N calls instead of a single call from the client doesn't increase latency much, since most or all of them can be done in parallel.
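A minimal sketch of that fan-out-and-degrade pattern, using Promise.allSettled with stubbed fetchers (the row names are made up):

```javascript
// Make N calls in parallel and keep whatever succeeded, instead of
// failing the whole page when one dependency is down.
async function loadLists(fetchers) {
  const results = await Promise.allSettled(fetchers.map((f) => f()));
  return results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
}

// Example: one of three "row" services is down; the other two rows
// can still be rendered.
const rowFetchers = [
  async () => ({ title: "Popular on Netflix", items: [1, 2] }),
  async () => { throw new Error("service outage"); },
  async () => ({ title: "Trending Now", items: [3] }),
];
loadLists(rowFetchers).then((rows) =>
  console.log(rows.map((r) => r.title)) // the two surviving row titles
);
```

A production version would add timeouts and circuit breaking per fetcher, but the shape is the same.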

If you want a real-world example of what I'm talking about, read up on how Netflix implements their LoLoMos (lists of lists of movies) in their clients. All of the data is fetched on the client side using N calls, and the client deals with failure seamlessly, presenting partial data to the user in the case of an outage. Ever wonder why certain lists of movies ("Popular on Netflix," etc.) sometimes don't show up, but the rest of Netflix works just fine? This is why.

So you said new people shouldn't take my advice. I'll say this - my advice is a modern, advanced approach that has made Netflix, Amazon, Google, Uber, eBay, and many other tech companies successful. I'm not that smart; I learned it from them. But if I were interviewing someone who said this is bad, they wouldn't qualify for a senior position. Think what you want. 🙂
 
Well, now you're changing the scope of things. We were initially talking about making small calls for an order-history example; we were talking about the "general sense," as you said. And now you are talking about services with tens and hundreds of millions of users. There are only a handful of companies on the level of the ones you mentioned, so I wouldn't really consider that the "general sense" of development. What you're saying does make sense at that scale, though, given the situation you described. I was talking more about the "general sense" of things.
 
As weird as it may sound, yes, I'm implying that you should need to make multiple calls to get your data ready to use. GraphQL is one particular way of doing it.

That is a terrible idea. First, it could be done in 1 call, and if the data is in the same database, it's even dumber to make 3 calls. It's a great example of microservices driven to the extreme being worse than a monolith. The backend should provide the exact data the consumer wants; if it can't, then you need a new method there. If you do tons of array and list sorting and manipulation with data that came from a database, you are doing it wrong, because the database should do that, and the database will be orders of magnitude faster at it.
 
That is a terrible idea. First, it could be done in 1 call, and if the data is in the same database, it's even dumber to make 3 calls. It's a great example of microservices driven to the extreme being worse than a monolith. The backend should provide the exact data the consumer wants; if it can't, then you need a new method there. If you do tons of array and list sorting and manipulation with data that came from a database, you are doing it wrong, because the database should do that, and the database will be orders of magnitude faster at it.
You've made a deadly assumption: "the" database. Why would you assume there's only one database? In fact, modern approaches tend toward each stateful service having its own database.

Lines are usually a bit blurry in the real world, but this is essentially how modern software is engineered.
 
You've made a deadly assumption: "the" database. Why would you assume there's only one database? In fact, modern approaches tend toward each stateful service having its own database.

Lines are usually a bit blurry in the real world, but this is essentially how modern software is engineered.

If you have customers, orders, and items and put them in different databases, you are a moron. The same goes for anything with a direct relationship. I don't mean you personally; I don't want to offend, but I hope you get my point.
 
Okay. 🙂

Remember when you'd be a moron to write server-side code in JavaScript? Things change.

One size does not fit all. There are tradeoffs for everything. But to say any one approach is wrong is naive. Without understanding this, you would fail technical interviews at many places.

I'll leave it there.
 
Remember when you'd be a moron to write server-side code in JavaScript? Things change.

I still think you are a moron if you use the Node ecosystem for production software (especially from the management/company perspective). Just remember when that one guy pulled his crappy left-pad code and half of the npm packages broke because they all depended on it. People adding a very crappy left-pad as a dependency tells you everything about the crowd.

Yeah, I know some big websites use it. Keep in mind I'm talking from the average programmer's viewpoint here. The average programmer isn't working for one of the top websites that have more or less limitless funds for playing around and experimenting and are, in general, more tech-oriented companies. The average programmer will do just fine with a boring Java, .NET, or Python framework.
 