Originally posted by: Flyback
Software as a Service is not a new idea.
I've thought about it in the past but never appreciated its full potential. The key benefit is simply that you control access to the product and can cut it off at any time if people don't pay. You can also move to a subscription-based model instead. Less piracy, too. All of that lets you increase profit.
But something I realized more recently is that in some cases (not all), you could protect something pretty advanced by never actually having to distribute a binary/the technology. There would be no potential to reverse engineer the underlying idea or algorithm.
It would only work in certain types of systems, but software that is heavily algorithm-oriented and relies on some key algorithm no one else has could benefit from this. There would be additional overhead, as with all Software as a Service: for example, you might need hardware to host servers that perform each client's tasks, but you could pass that cost back to them. This would let you keep algorithms completely private and undisclosed. If it were something genuinely groundbreaking, you wouldn't patent it at all (or publish the inner workings in any form).
Without patenting it or publishing any material, no one can just hop onto the idea when a normal patent would expire (a patent application does require you to disclose how the invention works, after all).
The only idea I can think of where this would work really well is in software that is heavily dependent on algorithms. If you developed some really unique algorithm ahead of its time that wouldn't likely be solved or discovered by anyone else in the near future you could maximize the return on it.
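A minimal sketch of the idea, assuming Python's standard library: the "secret" algorithm (`secret_score` here is purely hypothetical) lives only on the server, and clients only ever exchange inputs and outputs over HTTP, so there is no binary to disassemble.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen, Request

# Hypothetical "secret" algorithm -- a stand-in for whatever proprietary
# logic you want to keep off the client's machine entirely.
def secret_score(values):
    # The client never sees this body, only inputs and outputs.
    return sum(v * v for v in values) % 97

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the client's JSON payload, run the secret algorithm
        # server-side, and return only the result.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"score": secret_score(payload["values"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=0):
    # port=0 lets the OS pick a free port; returns the running server.
    server = HTTPServer(("127.0.0.1", port), ScoreHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve()
    req = Request(f"http://127.0.0.1:{srv.server_port}/score",
                  data=json.dumps({"values": [3, 4]}).encode(),
                  headers={"Content-Type": "application/json"})
    print(json.loads(urlopen(req).read()))  # prints {'score': 25}
    srv.shutdown()
```

All the client can do is probe inputs and outputs, which is exactly the point: the algorithm itself never leaves your hardware.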
What do you think?
software as a service will ultimately fail because people are deeply paranoid about it...
Cable TV works because it is merely full-spectrum content delivery... there is basically no talkback; which channel you watch is virtually undetectable, so there is no "big brother" in your technology...
With interactive programming such as software, you lose the benefit of this privacy...
additionally, the internet does not have the bandwidth to host all applications for all the PC users in the world. Not even close.
I wouldn't be so quick to dismiss it. Lots of companies aren't asking--they are dictating--that their newer software comes as a service.
I understand privacy concerns (and that is my largest concern) but it doesn't really matter because a lot of businesses stand to profit from SaaS and will increasingly move towards it.
(I know that SaaS != Web App, but in many cases there is lots of overlap.)
Not all applications can be hosted obviously and things like a "web OS" are just pure bull IMO, but many many apps are moving into web app domain with increasing browser support for a full-on GUI (beyond simple Javascript+AJAX... I mean like XUL interfaces and whatnot).
Again, the idea I had would only work in systems that are really heavily dependent on algorithms and not something like a calendar program.
I wouldn't be so quick to dismiss it. I was bootstrapping OSes (I hate trying to pluralise acronyms that end in S) years ago, and the loader was on a remote server. Loading something as complex as Windows isn't feasible, but again, newer development platforms that load/compile code JIT (just in time) allow you to fragment your deployment model.
I'll admit most of my lack of faith in it comes from a misunderstanding. I just think it would take an incredible amount of bandwidth to happen over the coming 5 years. What would a WebOS look like? I think people largely use the wrong terminology--what they have in mind is perhaps something very lightweight. I'll have to read up if I get some extra time.
Web-deployed need not mean web-based. You will always need a loader, because all machines capable of running some non-trivial OS have to look for the initial loader somewhere on disk/memory/whatever. This very basic loader then has the ability to either continue loading from disk, or to instead refer to another server, and that server can be available on any medium. Unix has done this for years, Windows has some ability to do so as well, and even smaller devices (namely Smartphones) use this model to some degree; they give you the initial loader and perhaps a limited user-interface, but the actual applications, etc. are retrieved as needed, cached in memory as needed and discarded when not needed.
I think the key point here is to ignore web anything; instead, just consider the idea of abstracting away the initial loader (in the case of a PC, the BIOS in combination with the system files, MBR, etc. that exist in the first sector of the hard drive) of some device and subsequent operating system it uses. A weak analogy might be to suggest something like MS-DOS being loaded on the machine and Windows 3.11 being loaded when requested. That would be completely feasible for small deployments given current bandwidth availability, albeit at a significant drop in functionality.
Further, I still think you could keep some algorithms secret. Everything can be reverse-engineered if you have access to it; it becomes much harder when the algorithm is very complicated and you don't have access. Simple ones? Yeah, they wouldn't be so hard to infer from inputs and outputs alone.
My fundamental approach to security is simple: don't make available anything that you ultimately can't afford to lose. If you're sitting on an algorithm that's generating billions in revenue for your company, someone will find a way to reverse engineer it.
It's a variation of the old security-by-obscurity argument. People think that if something isn't well known, accessible, etc., it's inherently more secure than something more open, but we know the opposite to be true. (This doesn't consider intellectual property concerns, of course.) Consider how some people are turned away from runtimes like Java or .NET simply because they think they're inherently insecure due to the bytecode/intermediate language (for Java and .NET, respectively) they use. The reality is that the only difference is the level of abstraction: someone wanting your secrets can reverse-engineer your x86-compiled code just as well as they can Java or .NET bytecode.
The point: If you allow any of these algorithms to actually execute on a user's machine, it can and will be reverse-engineered if there is thought to be value in doing so by others.
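To make that concrete, here is a sketch using Python's `dis` module (`checksum` is a made-up stand-in for a "secret" routine). Even with no source code, the compiled artifact lays out the control flow and magic constants, just as a native disassembler would for x86 machine code.

```python
import dis

def checksum(data):
    # A made-up stand-in for a "secret" algorithm shipped to users.
    total = 0
    for b in data:
        total = (total * 31 + b) % 65521
    return total

# Anyone holding the compiled artifact can list its instructions and
# constants; the multiplier 31 and modulus 65521 are sitting in plain
# sight in the code object, no source required.
dis.dis(checksum)
print(checksum.__code__.co_consts)  # the "magic" constants are right there
```

The same holds one abstraction level down: `objdump -d` on a stripped x86 binary reveals the equivalent instruction stream, just with more effort required to read it.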
I think I'm repeating myself now
😀
I guess I just had a particular case in mind when writing the op 😉
There is definitely something viable in the model, but the proverbial devil is in the details. How does that saying go? "Those who say it can't be done shouldn't interrupt those who are doing it." So, if you have an idea... go for it
😀
[edit]Thought I'd say I apologize in advance if any of the above is patronizing. Not trying to tell you anything you probably already know...[/edit]