The future of software? Predictions?

krackato

Golden Member
Aug 10, 2000
1,058
0
0
I was just wondering what everyone thinks the future of software is going to be. Specifically, with the tech sector in such bad shape (especially AMD), I think everyone agrees that processor performance has progressed faster than our current software needs. 1GHz is pretty damn powerful, but now that we're about to break 3GHz, I don't really know what the average user is going to need all that power for, apart from games.

Use your imagination, or post links to developing software. Apart from games and distributed computing, how are people going to make better use of their overly powerful processors in the future, so that speeds continue to increase and prices continue to fall, benefiting us techies who can never seem to get our SETI units pumped out fast enough?

In addition, what kind of software do you imagine our 10-12GHz machines could run in 3-4 years' time?
 

singh

Golden Member
Jul 5, 2001
1,449
0
0
Technologies like voice recognition require lots of CPU power. We might also start seeing 'true' 3D user interfaces. Another use might be embedding AI technology (which will definitely require lots of gigahertz).

As far as the developers are concerned, the more power the better. Of course, mass storage speed also needs to keep improving to fully benefit from the extra power available.
 

krackato

Golden Member
Aug 10, 2000
1,058
0
0
Is that it? Voice recognition and 3D interfaces are all we have to look forward to? Computers are already fast enough for voice recognition, and I fail to see how a 3D interface is going to be anything more than eye candy.

As far as developers go, sure, the more power the better, but I'm talking about the consumer. When the average consumer can't tell the difference between a 1GHz and a 3GHz machine, what's the motivation for upgrading?

BTW, A.I.: I'll give you that one. But is everyone going to have to wait for A.I. to have a 'reasonable' and compelling reason to upgrade?
 

sandorski

No Lifer
Oct 10, 1999
70,791
6,350
126
I think you have a point: most people don't need the power now available, although a significant number still do. Graphics, especially for gaming, is probably the most significant need for more power right now, but even that is becoming more the realm of graphics chips than of CPUs. I still think increased CPU power will continue to be a concern, but eventually (and likely soon) video cards will remove, or significantly lessen, gaming as a reason for CPU upgrades. When that happens, either some new app will need to revolutionize the industry, or large-capacity CPU producers like Intel and AMD will need to focus on low-power/low-heat processors in order to survive (this will actually probably be easier for them to accomplish if they can successfully produce CPUs that are faster than what is needed, IMO).
 

aircooled

Lifer
Oct 10, 2000
15,965
1
0
Obviously the average Word/Excel/web-surfing user doesn't need anything over a few hundred MHz. But the Anandtech-level user who games, edits video/graphics, programs, and dabbles in new technology will definitely benefit from higher clock speeds.

I'd like to see 5GHz processors and fiber to the home. I figure 5GHz isn't too far away, but fiber to the home is probably 10 years (or more) away.

 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Forget the CPU; hardware developers need to focus on the major bottlenecks. The biggest one right now seems to be I/O. Most CPUs spend most of their time idle (some of us run a DC project, so that's not true for us), and much of that idle time is spent waiting for I/O.
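
To put a number on it, here's a toy Python sketch (simulated I/O and made-up timings, not a real benchmark) of the point: the sequential version leaves the CPU idle during every read, while the overlapped version prefetches the next block on a thread while crunching the current one. Read-ahead and async I/O in real OSes play the same trick.

```python
import threading
import time

def slow_read(block):
    """Simulate a disk read: the CPU sits idle for the whole wait."""
    time.sleep(0.1)  # pretend I/O latency
    return block

def crunch(data):
    """Simulate some CPU work on a block."""
    return sum(i * i for i in range(50_000)) + data

def sequential(blocks):
    return [crunch(slow_read(b)) for b in blocks]

def overlapped(blocks):
    """Hide I/O latency: fetch block N+1 while crunching block N."""
    results, buf = [], {}

    def prefetch(b):
        buf["data"] = slow_read(b)

    t = threading.Thread(target=prefetch, args=(blocks[0],))
    t.start()
    for i in range(len(blocks)):
        t.join()
        data = buf["data"]
        if i + 1 < len(blocks):
            t = threading.Thread(target=prefetch, args=(blocks[i + 1],))
            t.start()
        results.append(crunch(data))
    return results

for fn in (sequential, overlapped):
    start = time.time()
    fn(list(range(10)))
    print(fn.__name__, round(time.time() - start, 2), "s")
```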
 

IamDavid

Diamond Member
Sep 13, 2000
5,888
10
81
Future of software development...

Companies and coders will continually get lazier and lazier writing code, because they know the hardware is getting faster and faster and there is no longer a space issue (most people have huge hard drives). Eventually all programs will be as bloated and as poorly written as RealPlayer is. I can think of very few, if any, programs that have evolved over the years into a faster, sleeker program. Look at business apps (MS Office), multimedia apps (Winamp), or any other type: they are getting bigger and bigger for no reason. Give me small, stable, boring programs over massive bloated crap any day.

I know my view is a bit harsh, but until I see anything different, it's what I'll believe.
 

LiLithTecH

Diamond Member
Jul 28, 2002
3,105
0
0
Forget CPU and I/O.

What is really needed is better programming.

The advent of faster processors and larger storage capacities has led to sloppy, bloated programming code.

The original GEM (Xerox-style GUI) OS fit in a 2k ROM. The original Windows 3.1 came on six 720k floppies.
The next iteration of Windows (64-bit) will more than likely reside on two or three 600MB CDs.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Forget CPU and I/O.

What is really needed is better programming.

All 3 are needed, for different reasons. You shouldn't hold the hardware back just because you could make the software better; make both better, because people will find uses for all of it.

The original GEM (Xerox-style GUI) OS fit in a 2k ROM

How many devices did it support? How many languages? If Windows only ran on a limited number of PC models (say, five at most), it would be much smaller too.

The next iteration of Windows (64-bit) will more than likely reside on two or three 600MB CDs.

Not necessarily; if it does, the main reason will probably be that it includes the 32-bit version too, like the older NT CDs that had all four supported architectures on one CD.

And have you read the current list of unsupported features in Win64? It reads like the feature list MS gives for upgrading from Win2K to XP.
 

ProviaFan

Lifer
Mar 17, 2001
14,993
1
0
Artificial intelligence? What would it be used for? I can't think of any really compelling applications, unless it might be that your computer is "smart" and "anticipates" your actions by - for example - figuring out what link you're most likely to click next and downloading its page while you're still reading the current one. But then, that's similar in spirit to what the processor already does when it prefetches data into its L1 and L2 caches, and the operating system uses a disk cache for a similar purpose. That's not really AI, I suppose. Which brings me back to the beginning - what kind of AI applications would there be, to begin with?
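
Just to make the "anticipating" idea concrete, here's a toy sketch (hypothetical names, nothing fancy - just click-frequency counting) of a browser that learns which link usually follows the current page and prefetches the best guess:

```python
from collections import Counter, defaultdict

class LinkPredictor:
    """Toy 'anticipating' browser: remember which link tends to follow
    the current page, then prefetch the most likely one."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # page -> Counter of next pages

    def record_click(self, current, clicked):
        self.transitions[current][clicked] += 1

    def predict_next(self, current):
        nexts = self.transitions[current]
        return nexts.most_common(1)[0][0] if nexts else None

p = LinkPredictor()
for nxt in ["forums", "forums", "news"]:
    p.record_click("anandtech.com", nxt)
print("prefetching:", p.predict_next("anandtech.com"))  # 'forums'
```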

Voice recognition? On small handheld devices and other niche applications, I can really see voice recognition becoming useful, once the processing-power limits of such devices are overcome. On a desktop PC? Yea, if the additional power makes it almost 100% accurate, then it might just take off as a method of dictation for writing letters and stuff. But really, do you think most people would use it for everyday tasks like chatting on IRC or browsing the web? My throat would get awfully sore and dry, considering how much web browsing and IRC I do. ;)

And 3D UIs? Microsoft experimented with them several years ago (it's on the MS research site, but I'm too lazy to dig up a link right now). Apparently even with the technology back then, the performance of a basic 3D UI wasn't too bad, but nevertheless, what's the point of one? I may be skeptical, but IMHO they're pretty much useless.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Which brings me back to the beginning - what kind of AI applications would there be, to begin with?

Better Quake bots? Interactive movies where the other 'actors' respond to the user's dialogue realistically... see, you can combine AI and voice recognition there =)
 

Turkey

Senior member
Jan 10, 2000
839
0
0
Amateurs ;)

Alright, here's how it is:

Back in the day, computing power was cool because it allowed a lot of automation to happen, like reliably crunching through math equations. Then you had your productivity tools, which allowed you to create documents much more quickly than using pen and paper. Then Al Gore invented the Internet, and suddenly the PC was a good way of accessing remote information and then an interesting way of communicating with others. So that's basically where we are today... with me sitting here and you sitting there, posting away on Anandtech.

So throughout the years, a PC has basically been designed for a single person to use at a time (one keyboard, one mouse, one 17" or so monitor), or for many people to share a processor as if each were the only person using the PC. But this is very limiting... for example, if I'm doing a school group project and each person writes a portion of a paper, they all have to email their sections to a single person, who then has to combine everyone's pieces, and then everyone has to re-read the paper to make sure it's all good, then everyone sends back comments, and it takes forever. Alternatively, everyone sits around a screen while one person types, and each person points to a place on the screen and says "change this." In either case, in the final document it's impossible to tell who wrote what and why.

So this process could be dramatically improved... create a multi-user interface where everyone can participate in the creation and editing of the document simultaneously, and where the document tracks who contributed what, so that it's easy to consult the person on their rationale. The multi-user interface wouldn't even have to be with people in the same room... I could be sitting in my room, and you in yours, and we could both be editing the same document while the interface tracks who made what changes.

Coincidentally, this would be a good use for 3D UIs... each person might get a layer, or when a data conflict occurs, the conflicting data could be presented along the z axis... a totally flat document would have no conflicts, a bumpy document would have multiple conflicts, and each person would be represented by a different color in the conflict stack.
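
A rough sketch of what that attribution/conflict-stack data model might look like (toy Python; names like CollabDoc and propose() are made up for illustration, not any real product):

```python
class CollabDoc:
    """Every line carries a stack of (author, text) versions.
    One entry above the base is a clean edit; two or more make the
    document 'bumpy' along the z axis at that spot."""

    def __init__(self, base_lines):
        self.lines = [[("base", text)] for text in base_lines]

    def propose(self, author, idx, text):
        self.lines[idx].append((author, text))

    def conflicts(self):
        # Lines where more than one person piled a change on the base.
        return [i for i, stack in enumerate(self.lines) if len(stack) > 2]

    def blame(self, idx):
        # Who last touched this line, and what they wrote.
        return self.lines[idx][-1]

doc = CollabDoc(["intro", "body", "conclusion"])
doc.propose("joe", 1, "body, as rewritten by Joe")
doc.propose("jim", 1, "body, as rewritten by Jim")
print(doc.conflicts())  # [1] - Joe and Jim both changed line 1
print(doc.blame(1))     # ('jim', 'body, as rewritten by Jim')
```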

Another direction that I see the industry heading is towards more support for handwriting... not just text recognition, but also for the following case: I'm reading an Acrobat document for a class, and I see something I want to highlight. Can't be done! I have to print it out and highlight it. "Well that's a pretty easy mod to Acrobat!" you say, but then this applies to any program on your PC, and why should I be restricted to highlighting? What if I want to add comments in the margin, or draw arrows, or whatever? The interface has to have native support for persistent freehand editing (save, close, reopen and the comments are still there) of any document, and of course the multiple user rules must still apply - I want to know if it was Joe or Jim who wrote the insightful comment in the margin.
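
Again, just a sketch of the plumbing (a hypothetical sidecar-file approach; a real app could store the ink however it likes): annotations as author-tagged objects persisted next to the document, so they survive save/close/reopen:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    author: str   # so you know whether Joe or Jim wrote it
    page: int
    kind: str     # "highlight", "arrow", "margin-note", ...
    points: list  # freehand stroke coordinates, e.g. [[x, y], ...]
    note: str = ""

def save_annotations(path, annotations):
    """Persist the ink in a sidecar file next to the (read-only) document."""
    with open(path, "w") as f:
        json.dump([asdict(a) for a in annotations], f)

def load_annotations(path):
    with open(path) as f:
        return [Annotation(**d) for d in json.load(f)]

ink = [Annotation("joe", 3, "margin-note", [[50, 120]], "insightful comment")]
save_annotations("paper.pdf.ink.json", ink)
print(load_annotations("paper.pdf.ink.json")[0].author)  # 'joe'
```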

Software creation will change... there are already UML tools that create Java or C++ code from UML diagrams. It's only a short time before UML becomes more refined and standardized, and is able to completely specify the actions of software, so that the Java output will be unnecessary and the UML tool will become a compiler, i.e. it will turn diagrams directly into bytecode/assembly. As a result, we may see more and more people able to write their own software, sort of like how the emergence of "high-level languages" like C++ and Java has enabled many more people to write software than when software was only written in assembly.
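
In that spirit, a minimal sketch of the diagram-becomes-compiler idea, with a Python dict standing in for the UML model (the spec format is invented for illustration):

```python
# A declarative "diagram" of one class, and a trivial generator that
# turns it into live code - the same move a UML-to-code tool makes.
spec = {
    "name": "Account",
    "attributes": ["owner", "balance"],
    "methods": {"deposit": ["amount"], "withdraw": ["amount"]},
}

def generate(spec):
    lines = [f"class {spec['name']}:"]
    args = ", ".join(spec["attributes"])
    lines.append(f"    def __init__(self, {args}):")
    for attr in spec["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    for method, params in spec["methods"].items():
        sig = ", ".join(["self"] + params)
        lines.append(f"    def {method}({sig}):")
        lines.append("        pass  # body would come from the diagram's behavior spec")
    return "\n".join(lines)

source = generate(spec)
print(source)
exec(source)                   # "compile" the diagram into a live class
acct = Account("krackato", 0)  # the generated class is now usable
```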

And then all the standard software predictions... smaller screens will become more prevalent, so people will be using pens more instead of mice, programs will grow in size and functionality, web services will be big, etc...
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Turkey
Amateurs ;)

The difference between being an amateur guessing at what is going to happen and being a professional "future of software" analyst is like the difference between sex and masturbation. I predict things like this to have fun and interact with people; the analysts do this to make themselves feel better. :D

Alright, here's how it is:

Back in the day, computing power was cool because it allowed a lot of automation to happen, like reliably crunching through math equations. Then you had your productivity tools, which allowed you to create documents much more quickly than using pen and paper. Then Al Gore invented the Internet, and suddenly the PC was a good way of accessing remote information and then an interesting way of communicating with others. So that's basically where we are today... with me sitting here and you sitting there, posting away on Anandtech.

So throughout the years, a PC has basically been designed for a single person to use at a time (one keyboard, one mouse, one 17" or so monitor), or for many people to share a processor as if each were the only person using the PC. But this is very limiting... for example, if I'm doing a school group project and each person writes a portion of a paper, they all have to email their sections to a single person, who then has to combine everyone's pieces, and then everyone has to re-read the paper to make sure it's all good, then everyone sends back comments, and it takes forever. Alternatively, everyone sits around a screen while one person types, and each person points to a place on the screen and says "change this." In either case, in the final document it's impossible to tell who wrote what and why.

So this process could be dramatically improved... create a multi-user interface where everyone can participate in the creation and editing of the document simultaneously, and where the document tracks who contributed what, so that it's easy to consult the person on their rationale. The multi-user interface wouldn't even have to be with people in the same room... I could be sitting in my room, and you in yours, and we could both be editing the same document while the interface tracks who made what changes.

Coincidentally, this would be a good use for 3D UIs... each person might get a layer, or when a data conflict occurs, the conflicting data could be presented along the z axis... a totally flat document would have no conflicts, a bumpy document would have multiple conflicts, and each person would be represented by a different color in the conflict stack.

Word and other productivity suites already do this. Word can take advice from one user and have it show up in the document for another user. The second user can then modify the original based on the advice or whatnot. It's kind of neat, but not a huge feature in my opinion (I've never worked on anything where it would be relevant).

Another direction that I see the industry heading is towards more support for handwriting... not just text recognition, but also for the following case: I'm reading an Acrobat document for a class, and I see something I want to highlight. Can't be done! I have to print it out and highlight it. "Well that's a pretty easy mod to Acrobat!" you say, but then this applies to any program on your PC, and why should I be restricted to highlighting? What if I want to add comments in the margin, or draw arrows, or whatever? The interface has to have native support for persistent freehand editing (save, close, reopen and the comments are still there) of any document, and of course the multiple user rules must still apply - I want to know if it was Joe or Jim who wrote the insightful comment in the margin.

Apple is supposed to have one of the best handwriting recognition engines in the world. Inkwell is the new name for it (based on Newton tech, I believe). Expect applications to take advantage of it in the future. IBM also has a notebook-like thing with a pad you can write on. I have never seen one in person, so I wouldn't be surprised if it died off.

Software creation will change... there are already UML tools that create Java or C++ code from UML diagrams. It's only a short time before UML becomes more refined and standardized, and is able to completely specify the actions of software, so that the Java output will be unnecessary and the UML tool will become a compiler, i.e. it will turn diagrams directly into bytecode/assembly. As a result, we may see more and more people able to write their own software, sort of like how the emergence of "high-level languages" like C++ and Java has enabled many more people to write software than when software was only written in assembly.

And then all the standard software predictions... smaller screens will become more prevalent, so people will be using pens more instead of mice, programs will grow in size and functionality, web services will be big, etc...

I disagree on the pen thing; I think the mouse's design will change, but whatever new design comes along will keep basically the same low-level concept as the mouse. Web services are already big, but I don't see them taking over the industry (in the style of the mainframes of old), and not a whole lot of the large programs really grow in functionality, only size. This is all my opinion only, of course. :)
 

DJSnairdA

Golden Member
Dec 30, 2000
1,018
0
0
This debate is similar to the car one: whether or not we'll have flying cars in a few years/decades/centuries.

I think we can all draw the same conclusion: we don't know. The future of software is in the hands of the programmers, and if you're a programmer now, you have the ability to shape it! :)

I can't see software changing too much in the near future, because what we have now is appealing and people are using it, and change can sometimes have disastrous effects - risks which I'm sure developers are not yet willing to take, unless of course the final product is a sure winner.

What I do disagree with is 13-year-old computer geeks talking their parents into buying top-of-the-range hardware as a status symbol at school :)
 

ProviaFan

Lifer
Mar 17, 2001
14,993
1
0
Originally posted by: n0cmonkey
Originally posted by: Turkey
Amateurs ;)
The difference between being an amateur guessing at what is going to happen and being a professional "future of software" analyst is like the difference between sex and masturbation. I predict things like this to have fun and interact with people; the analysts do this to make themselves feel better. :D
LMAO, but that's a very good point. ;)
Alright, here's how it is:

Back in the day, computing power was cool because it allowed a lot of automation to happen, like reliably crunching through math equations. Then you had your productivity tools, which allowed you to create documents much more quickly than using pen and paper. Then Al Gore invented the Internet, and suddenly the PC was a good way of accessing remote information and then an interesting way of communicating with others. So that's basically where we are today... with me sitting here and you sitting there, posting away on Anandtech.

So throughout the years, a PC has basically been designed for a single person to use at a time (one keyboard, one mouse, one 17" or so monitor), or for many people to share a processor as if each were the only person using the PC. But this is very limiting... for example, if I'm doing a school group project and each person writes a portion of a paper, they all have to email their sections to a single person, who then has to combine everyone's pieces, and then everyone has to re-read the paper to make sure it's all good, then everyone sends back comments, and it takes forever. Alternatively, everyone sits around a screen while one person types, and each person points to a place on the screen and says "change this." In either case, in the final document it's impossible to tell who wrote what and why.

So this process could be dramatically improved... create a multi-user interface where everyone can participate in the creation and editing of the document simultaneously, and where the document tracks who contributed what, so that it's easy to consult the person on their rationale. The multi-user interface wouldn't even have to be with people in the same room... I could be sitting in my room, and you in yours, and we could both be editing the same document while the interface tracks who made what changes.

Coincidentally, this would be a good use for 3D UIs... each person might get a layer, or when a data conflict occurs, the conflicting data could be presented along the z axis... a totally flat document would have no conflicts, a bumpy document would have multiple conflicts, and each person would be represented by a different color in the conflict stack.
Word and other productivity suites already do this. Word can take advice from one user and have it show up in the document for another user. The second user can then modify the original based on the advice or whatnot. It's kind of neat, but not a huge feature in my opinion (I've never worked on anything where it would be relevant).
Yea, but I think he's talking about a more interactive kind of multi-user editing. NetMeeting has whiteboard features and such (that I've never used) which might be sort of useful in this kind of environment, but they're not really integrated with the document creation programs.

Of course, programs have existed for some time (CVS comes to mind here) that do something like what you're talking about, but AFAIK they only work with multi-file projects, where different users can comment on and edit different files, but not a single large file simultaneously. Though IMHO it probably wouldn't be too hard to make a CVS-like system to support this; the system would have to be able to understand and modify the specific document format (something that AFAIK CVS can't do right now).
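
Here's what such a format-aware, single-file merge could look like in miniature (toy Python; it assumes line-for-line edits with no insertions or deletions, which a real tool would obviously have to handle):

```python
def merge(base, joe, jim):
    """Merge two users' edits to the same file against a common base.
    Returns (merged_lines, blame); a line both users changed differently
    becomes a conflict 'stack' holding both versions."""
    merged, blame = [], []
    for b, j1, j2 in zip(base, joe, jim):
        if j1 == j2:
            merged.append(j1)
            blame.append("base" if j1 == b else "both")
        elif j1 == b:
            merged.append(j2)
            blame.append("jim")
        elif j2 == b:
            merged.append(j1)
            blame.append("joe")
        else:
            merged.append(f"<<< joe: {j1} ||| jim: {j2} >>>")
            blame.append("CONFLICT")
    return merged, blame

base = ["intro", "methods", "results"]
joe  = ["intro", "methods, expanded", "results"]
jim  = ["intro", "methods", "results, with charts"]
print(merge(base, joe, jim))
```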
Another direction that I see the industry heading is towards more support for handwriting... not just text recognition, but also for the following case: I'm reading an Acrobat document for a class, and I see something I want to highlight. Can't be done! I have to print it out and highlight it. "Well that's a pretty easy mod to Acrobat!" you say, but then this applies to any program on your PC, and why should I be restricted to highlighting? What if I want to add comments in the margin, or draw arrows, or whatever? The interface has to have native support for persistent freehand editing (save, close, reopen and the comments are still there) of any document, and of course the multiple user rules must still apply - I want to know if it was Joe or Jim who wrote the insightful comment in the margin.
Apple is supposed to have one of the best handwriting recognition engines in the world. Inkwell is the new name for it (based on Newton tech, I believe). Expect applications to take advantage of it in the future. IBM also has a notebook-like thing with a pad you can write on. I have never seen one in person, so I wouldn't be surprised if it died off.
I expect handwriting recognition to become very prevalent in handheld devices in the future (anyone else see THG's IDF 2002 video? In it, a guy was demonstrating a handheld with handwriting support), but pen and touchpad devices on desktop PCs (which, by the way, are not going the way of the dodo, but that's another story...) will most likely remain mostly in the domain of graphic artists.
Software creation will change... there are already UML tools that create Java or C++ code from UML diagrams. It's only a short time before UML becomes more refined and standardized, and is able to completely specify the actions of software, so that the Java output will be unnecessary and the UML tool will become a compiler, i.e. it will turn diagrams directly into bytecode/assembly. As a result, we may see more and more people able to write their own software, sort of like how the emergence of "high-level languages" like C++ and Java has enabled many more people to write software than when software was only written in assembly.

And then all the standard software predictions... smaller screens will become more prevalent, so people will be using pens more instead of mice, programs will grow in size and functionality, web services will be big, etc...
I disagree on the pen thing; I think the mouse's design will change, but whatever new design comes along will keep basically the same low-level concept as the mouse. Web services are already big, but I don't see them taking over the industry (in the style of the mainframes of old), and not a whole lot of the large programs really grow in functionality, only size. This is all my opinion only, of course. :)
Smaller screens? On the desktop? No, I think not. LCDs, with the new technology that's supposed to make them more responsive (is that out yet or not?), may overtake CRTs in popularity, but I don't think screens will be getting smaller. Now, I'd love to see higher-resolution screens, but not much seems to have been done in that department recently, save IBM's huge (and expensive) flat panel that runs at something like 3000x2000 and has about 200dpi.

Oh, and as for web services: no, they won't really take off until dialup is dead (and that's probably not going to happen for another 20 years) and internet connections and hardware in general are much more reliable.

And mice? They'll be around, but perhaps in a radically different form. Some weird speculation on my part, but if (and that's a big IF) 3D UIs become more common, we may see a new type of "mouse" that facilitates movement in three dimensions. Any current mouse with a scroll wheel could be used, but it might be more practical to have some new form of input device. In fact, such input devices are available now; the SpaceBall 4000 by 3Dconnexion is one.

Edit for readability