Sunday, July 12, 2009

3 features that could double the speed of software development

If the following 3 features were implemented in modern programming tools, I would be twice as productive as a software developer:

§1.
When I am testing the software in debug mode and hit an exception, the debugger should record all the steps and states that happened previously, and I should be able to step back to any given step before the exception occurred and continue debugging from there.
It should also automatically set a breakpoint at the line where the exception occurred and jump right back to it. (Today VS2008 jumps to the catch statement when it hits an exception, and the memory state is broken, so I cannot go back to before the exception occurred and get a view of the memory state of all objects.)
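The record-and-replay idea can be sketched in a few lines of Python (a stand-in for the VS2008 feature I am asking for; `record_steps` and `buggy` are made-up names, not an existing API). The decorator records the local variables at every executed line, so after the exception you can inspect any earlier state:

```python
import sys

def record_steps(func):
    """Record (line number, local variables) for every executed line of
    `func`, so earlier program states can be inspected after an exception."""
    history = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            history.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer  # keep tracing nested events

    def wrapper(*args, **kwargs):
        history.clear()
        sys.settrace(tracer)
        try:
            return func(*args, **kwargs)
        finally:
            sys.settrace(None)

    wrapper.history = history
    return wrapper

@record_steps
def buggy(n):
    total = 0
    for i in range(n, -1, -1):
        total += 1 / i  # raises ZeroDivisionError when i == 0
    return total

try:
    buggy(3)
except ZeroDivisionError:
    # "step back" through the recorded states before the exception
    for lineno, state in buggy.history[-3:]:
        print(lineno, state)
```

This is of course only a toy: a real implementation would need to snapshot the heap, not just the local variables, to let you truly continue debugging from an earlier step.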

§2.
When I create a new method, the IDE should automatically generate a set of test cases for it. I should be able to right-click on the method and go directly to these test cases, and it should be easy to add new test input, even if it is complex (i.e. a DataSet, a Bitmap or a custom class).
Next, the test cases should be able to exercise the new method even when the entire project does not build, as long as the new method itself builds, together with the underlying classes etc. that it depends on.
I should not need to run the test cases manually. They should run in the background automatically every time I change something related to each method, and the output (OK or not OK) should be listed in the same view as Errors and Warnings. Today VS2008 runs the compiler in the background when developing VB.NET applications and continuously shows a list of errors in the source code; I just want the same thing for method test cases.
When I am in debug mode, I should be able to right-click on a function call which has just completed, choose "Add to test case", and have the IDE automatically add that input/output as a new test case. Similarly, whenever an exception is caught in debug mode, the IDE should automatically add a test case for the function call that resulted in the exception.
We will need this feature anyway if we want computers to be able to write their own software in the near future.
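A rough Python sketch of the "Add to test case" idea (the names `capture_cases`, `slugify` etc. are my own invention, not an existing tool): every successful call is recorded as an input/output pair, and the recorded pairs can later be replayed as regression tests:

```python
import functools

def capture_cases(func):
    """Record every successful (args, result) pair as a regression case,
    mimicking the proposed "Add to test case" IDE command."""
    cases = []

    @functools.wraps(func)
    def wrapper(*args):
        result = func(*args)
        cases.append((args, result))
        return result

    wrapper.cases = cases
    # replay all recorded cases against the current implementation
    wrapper.replay = lambda: all(func(*a) == r for a, r in cases)
    return wrapper

@capture_cases
def slugify(text):
    return text.strip().lower().replace(" ", "-")

slugify("Hello World")
slugify("  Tech Sing  ")
assert slugify.replay()  # re-run the recorded cases as tests
print(slugify.cases)
```

The IDE would do the same thing at debugger level: snapshot the arguments and return value of a completed call, persist them, and re-run them in the background after every edit.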

§3.
There should be an automatic refactoring feature: a tool that can inspect a piece of source code and refactor it so that the same code is not written twice anywhere in the current project. (Not one-liners, of course, but it should inspect for patterns.)
Then if some "stupid" programmer on my team had copied my code from one method to another and just changed it a little in the second method, the computer should be able to fix this automatically, so that the code exists in only one place. My code and the stupid programmer's code :-)) would both be refactored to call the new common code snippet with parameters, allowing them both to function as before.
- Next, it should even be possible to include one or several databases of code snippets in this pattern search, so that the search runs not only across my own project but also across one or several databases (public or private).
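The pattern search described above is essentially duplicate-code detection. Here is a minimal Python sketch under stated assumptions: real clone detectors work on token or syntax-tree level, while this toy version only normalizes whitespace, so a copy with renamed variables would escape it:

```python
from collections import defaultdict

def find_duplicates(sources, window=3):
    """Map every block of `window` whitespace-normalized lines to the
    methods it appears in; blocks seen in more than one method are
    candidates for extraction into a common snippet."""
    seen = defaultdict(list)
    for name, text in sources.items():
        lines = [" ".join(l.split()) for l in text.strip().splitlines()]
        for i in range(len(lines) - window + 1):
            seen[tuple(lines[i:i + window])].append((name, i))
    return {block: locs for block, locs in seen.items()
            if len({n for n, _ in locs}) > 1}

# two methods where the second is a re-indented copy of the first
sources = {
    "method_a": "total = 0\nfor x in items:\n    total += x\nreturn total",
    "method_b": "  total = 0\n  for x in items:\n      total += x\n  log(total)",
}
dups = find_duplicates(sources)
```

The second half of the feature, actually rewriting both call sites to use an extracted parameterized method, is the hard part; the sketch only finds the candidates.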

--------

I think a small team of 3-5 programmers per feature above would be able to implement these as described in approximately one year.
OK, so if it is true that these 3 features would save software developers around the world millions of hours each year, why are these features still not in our tools in 2009??

My conclusion is: because we, mankind, have not found a good way to collaborate and push software innovation forward in faster and faster cycles each year.
One of the reasons for writing this blog is that I want to present some of my suggestions on how we can collaborate and innovate much faster in the software industry.

Saturday, July 4, 2009

What I want to talk about in this blog

I find the technological singularity a very interesting subject, and I have now decided to write down some of my thoughts on the topic in this blog .. and hopefully get some feedback.

This blog is going to talk about the technological development, primarily related to software development, law and politics, that will happen between today and the time we reach the Technological Singularity. (period marked with green below)


Let me start out with some short statements. (I wanted to keep the format very short, since most of my readers are probably already familiar with the Technological Singularity and some of the following relations.)

§1
The Technological Singularity, to me, is the point in time where computers take over control of most of the technological development and innovation in the world. (Hereafter referred to as TechSing.)
§2
TechSing is man-made, since clearly we humans must drive the development from today until we reach TechSing.
§3
It is inevitable that we humans and our computer software will make big mistakes and dangerous errors before we reach TechSing.
§4
Such errors will create resistance, which will for some periods cause certain technologies to be banned or otherwise place obstacles in the way of technological development.
§5
The fundamental building block which computer software must master on its own is this: it must be able to create a simplified model of a system or process which emulates the behaviour of that system in the real world. (In the following I will refer to this as modelling, or just making a model.)

§6
The computer must be able to model on its own, and even be better at it than humans. (To this day I have never seen a computer model on its own, not even a very, very simple system.) I think we will see this happen within the next 50 years, but I must admit that it is also possible that it will never happen.

§7
My gut feeling tells me that the acceleration towards TechSing will not happen until:
A. We humans can lay out a sequence of the probable development steps which we need to accomplish between 2009 and the time we reach TechSing.
B. We humans start a war that can be won if humans create a computer that models on its own. :-(

§8
To avoid confusion I want to point out that for many years it has been possible for computer programs to write other computer programs. These can even be made very complex, based on dynamic statistics, libraries and databases. But somehow they are not able to compare the resulting system model to the real world and improve their own model to match the world better.
(If the reader wants to argue that neural networks do this, then I don't agree. Neural networks can optimize their weights to better match input and output, but they can't change the topology of the network to make it more optimal for solving a new problem.)
§9
I have worked professionally with software development for more than 10 years. My experience tells me that Moore's law has very little influence on when we reach TechSing. If you look at how little operating systems or search engines have changed during the last 5 years, then it is obvious that it can easily be 20 years before our computers will be able to help us make the simplest decisions or work on their own.
If the computer shall ever be able to outperform humans in innovation and development, then we need tools which enable programmers to develop and debug VERY, VERY complex software systems in a short time. Today more than half of the time in software projects is spent on testing and debugging. If we were able to get this below 5%, we would have taken a huge step towards TechSing.
But note: for many years huge economic resources have been invested in reducing this, and yet progress is still VERY, VERY slow. It is my hope that this blog will help us discuss and come up with some better ideas on this subject.
§10
Before I end this first blog entry, I want to list a number of software technologies which I think may be some of the important stepping stones on the road towards TechSing. (Not listed in any particular order.)

  • Spellchecker software which can correct a text (without asking questions) just as well as or better than if I sent the same text to another human for proofreading.
    [How difficult can that be.... my guess is that we will see this problem solved within 10 years]
  • Search engines which give only one answer (the right answer).
    [This already exists, but it needs to get much better during the next 20-50 years]
  • Computers which do not give error messages, but instead fix the error themselves and only ask if they are in doubt about which of two possible options to select.
    [This already exists, but only for very simple problems; it must improve extensively over the next 20-50 years]
  • Software tools which can automatically port software applications from one platform (i.e. operating system or programming language) to another.
    [Technically this is already possible to implement, but there needs to be a profitable market for it to develop]
  • Software which can make text, graphics and speech in video searchable and relate this to all the knowledge you already have. This of course requires that the computer monitors everything you do through your computer.
    [Development in this area has already started, and I think we will see the first solutions which can relate new information you acquire to stuff you have already learned (through the computer, that is) within 5-10 years]
  • Applications which suggest where to buy the same or a substitute product nearby at a lower price every time you make a purchase.
    [I have already seen the first such applications, but I expect this to grow very big within the next 10 years]
  • Cars which can drive by themselves and airplanes which can fly by themselves.
    [There are already experiments with both, but I doubt we will see this in our everyday lives within the next 15 years]
  • Systems which can give us cheaper insurance and minimize the security checks in airports etc., if we allow the software to monitor our behaviour every day of our lives and record who I meet, who I know, where I go, what I spend my money on, and what I read and see. Based on this, the software will easily be able to establish a risk profile and allow me to save time and money.
    [Technically it is not a problem to implement this today, but there must be a market for it to happen]
  • Systems which monitor what you spend your time on and suggest how you can become more productive on your computer by comparing how you work to how others work, e.g. suggest which software applications or services you could purchase to increase your own output.
    [This type of product will typically start in a niche, improve over time and spread to other niches. Today similar software exists which records computer usage and behaviour in large organizations, and consultants use the output to suggest actions to management]
  • Systems which can optimize the way limited shared resources are distributed among members of the community, like hospital waiting lists and traffic congestion. Today we rely on fixed rules or market economics to govern how limited resources are distributed. Potentially this can be done better with an adaptive software application which can model how different distribution models maximize the joint community value.
    [You probably already noticed that this is similar to what a chess computer does. Maybe it would be possible to make a general toolbox which would make it easy to create applications for many such related problems. Anyway, if we expect computers to take over all decisions in the future, then we must start with such systems. In the beginning, politicians will make the rules for how the system shall govern the resources. But as the models become more complex and rely on more and more input, computers will be used to simulate and suggest different possible models for the politicians to choose between. In the end the computer will give only one suggested model: its own favorite choice ;-)) ]
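To make the chess-computer analogy concrete, here is a toy Python sketch; the scenario, the names and the value model are all my own assumptions. It brute-forces which patients get the few available appointment slots, scoring each schedule as treatment value minus a per-week waiting cost, and keeps the schedule that maximizes joint value:

```python
from itertools import permutations

def best_schedule(patients, slots):
    """Try every ordering of patients, keep only the first `slots`
    (one appointment per week), and score a schedule as the sum of
    each patient's value minus urgency * weeks waited."""
    best, best_value = None, float("-inf")
    for order in permutations(patients):
        chosen = order[:slots]
        value = sum(p["value"] - p["urgency"] * week
                    for week, p in enumerate(chosen))
        if value > best_value:
            best, best_value = list(chosen), value
    return best, best_value

patients = [
    {"name": "A", "value": 10, "urgency": 5},
    {"name": "B", "value": 10, "urgency": 1},
    {"name": "C", "value": 3,  "urgency": 1},
]
schedule, value = best_schedule(patients, slots=2)
```

With 3 patients and 2 slots the search schedules the urgent patient A first. Like a chess engine, a real system would need far smarter search than brute force, plus the politically chosen value model discussed above.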