Richard Velden Learning Log
Reflection paper: The Future of Grid Computing
By: Richard Velden
The subject – Grid Computing
At first we only had a vague idea of what grid computing actually was. During the first round of selecting subjects, a few of us didn't even know grid existed. To be honest, it wasn't our number one pick. Surprisingly for some of us, the lecturer picked the grid computing option for us to build the scenarios on.
Because we only had very limited information about the subject, finding a lot of research questions was the easy part. More detailed questions (which surfaced once we dived deeper into the subject) were defined later on.
Research Phase:
Inspired by an online Flash presentation I saw a few months ago, I already had something that could have been part of a nice scenario for grid computing.
It describes in detail a series of events that result in an information grid dominated by Google (which took over Amazon to become GoogleZon).
With this idea in mind I started with a more 'what if' research approach, not really caring about the current status of grid technology and internet infrastructure, subjects that were properly investigated by the other group members. I therefore focused on future consumer applications for grid.
What I already knew was that parallel computing is mainly used for research: simulations, visualizations, advanced data mining, signal analysis, etc. This was not really the most interesting part of grid computing. The real idea of grid is to harness both the high-performance power of idle clusters and supercomputers and that of stand-alone PCs distributed over the entire world, with the ultimate goal of offering high-performance resources to everybody.
The main problem for consumer applications was finding a proper application that needs high-performance computing resources and is exploitable for parallel execution. Even more important is whether such applications exist that could offer value to the customer.
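To make 'exploitable for parallel execution' a bit more concrete, here is a minimal sketch (my own illustration, not part of the original project) of an embarrassingly parallel workload: the input is cut into independent chunks that could each be shipped to a different machine. Python's multiprocessing pool merely stands in for grid worker nodes, and the sum-of-squares workload is purely illustrative.

 # Minimal sketch of a workload that is "exploitable for parallel execution".
 # A local process pool stands in for grid worker nodes; on a real grid each
 # chunk would be shipped to a remote machine instead.
 from multiprocessing import Pool
 
 def process_chunk(chunk):
     """Independent work unit: no shared state, so it can run on any node."""
     return sum(x * x for x in chunk)
 
 def split(data, n_chunks):
     """Cut the input into roughly equal, independent chunks."""
     size = max(1, len(data) // n_chunks)
     return [data[i:i + size] for i in range(0, len(data), size)]
 
 if __name__ == "__main__":
     data = list(range(1_000_000))
     chunks = split(data, n_chunks=8)
     with Pool(processes=8) as pool:           # eight local "nodes"
         partials = pool.map(process_chunk, chunks)
     print(sum(partials))                      # combine the partial results

The point of the sketch is the shape of the problem: only applications that decompose into such independent work units are good grid candidates.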
Applications suggested by Google: stock market simulations. Even the almighty Google couldn't help me any further. Because of my own limited insight into grid computing, we tried to get some real expert opinions by requesting an interview with one of my former computer science professors, who is involved in the High Performance Computing department at LIACS. This interview was scheduled only two days before our deadline, and was then postponed another day. Due to time constraints we skipped the interview.
Luckily we were able to get a short presentation and interview from Jing's boyfriend, a computer science student who was more into grid computing than I was. The presentation was helpful in providing a view of grid computing as it is right now and as it may be in the future. It was a rather pessimistic view in my opinion, but it was consistent with other expert opinions on grid computing found on the internet.
Creation of a systems map:
The lecture on systems maps reminded me of biology courses from high school. And because I used to like biology, I also liked systems maps. I am a rather simple person.
The 'War on Drugs' example gave a good impression of how to construct our own system. It showed what our 'freedoms' are, and that 'dollars American' are the motivator for almost everything. It was also very nice to see how even the best minds in the world can fail.
As a result of the lecture we reconstructed our initial systems map into a more demand/supply driven one.
Systems map part two:
Constructing a systems map from scratch with an entire group (which had not done this before) was an ill-conceived idea. The initial map was incomplete, focused on a subset of the actual problem and dwelt too much on technical terms that did not really matter. More preparation was obviously needed, because most arguments within the group were only partially based on real information or rested on misinterpretations. Bickering about a subject you haven't studied well is futile and only generates large amounts of stress. Not knowing enough about the various subjects was due to a very tight schedule.
In our next 'systems map session' we each tried beforehand to make a systems map of our own, capturing a personal view on the subject without bothersome comments from others. This resulted in the same amount of arguing and bickering, mainly about terminology and the scope of the systems map. But at least now we sort of knew what we were talking about. After some fierce fights, we finally ended with a relaxing cup of coffee. When we finished our coffee everything was settled and we magically had consensus about the systems map.
Key driving forces:
We didn't want to choose just two main driving forces, but also to differentiate on various other aspects, although those were almost directly derivable from the two main driving forces. These aspects formed the backbone structure for our final scenarios. Scenario choices: one scenario was dropped because it was a bit dull, leaving us with the other three.
My view on: Scenario planning and systems thinking
This was not the first time we made models of reality. Simple models describe the world around us as static objects, while more complex ones show us interaction. Some models show both, just like the systems map does. But what makes systems maps so distinctive? What I've noticed is that the systems map is the first model I've encountered that introduces the notion of stability. The model itself has a major resemblance to biology, which also revolves around similar (eco)systems. The actual innovation therefore is that this kind of model can also be used to reflect other parts of reality. It actually shows us that our civilization resembles an ecosystem.
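The 'stability' I mean here comes from feedback loops in the map. A tiny sketch (a made-up supply/demand balancing loop, not taken from our actual grid systems map) shows how a negative feedback loop pulls a variable back toward an equilibrium:

 # Toy balancing (negative) feedback loop: price keeps adjusting toward the
 # point where supply meets demand. All numbers are illustrative only.
 def simulate(price=10.0, steps=20, k=0.1):
     history = []
     for _ in range(steps):
         demand = 100 - 4 * price        # higher price -> less demand
         supply = 2 * price              # higher price -> more supply
         price += k * (demand - supply)  # the gap feeds back into the price
         history.append(round(price, 2))
     return history
 
 print(simulate())  # the values settle toward the equilibrium price (~16.7)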
As far as the scenario construction procedure being a learning tool, as well as a way to express oneself and convince others, the same can be said of other models. Simply put: a graphical or simplified description of the real world is always useful for communicating thoughts and refining them. The real extra provided by scenario planning was that we build a real-life story around each scenario. This actual instantiation of an alternate timeline in written text makes a scenario look more plausible. Even stronger ways to convince people of the plausibility of a scenario are animations, movies or, even better, a complete virtual reality scenario.
This is also the weakness of scenarios in themselves: a good writer can make people believe in a certain scenario even if the actual research behind it is incorrect.
Finally, the follow-up:
We should have concentrated more on the Cell architecture. And although I personally focused a little on that rather technical subject, I was quickly convinced/overruled by the rest of the group to 'stick to grid'. Being trapped in the same time-space as the rest of the world, I wasn't able to take a detailed hardware look at Cell. Given more time we could have investigated the difference between the theoretical and 'actual' computing performance of this architecture.
The initial Cell research findings were not really in favor of Cell's potential, although press releases told us Cell was going to be very hot. The same was true a year before the launch of Sony's PlayStation 2, with its over-rated 'Emotion Engine'. It seems that they already have a record of overstating theoretical performance compared to what is really achievable. IBM is also already selling 'grid toolkits' which are in fact just front ends for IBM clusters, a technology far less complex than grid. It seems like the big companies are simply over-hyping their new technologies.
To truly assess the power of Cell we should dive into the hardware details and look at somewhat comparable systems. I feel Sony press releases and IBM marketing cannot really be trusted (both have a significant stake in the Cell architecture, together with Toshiba). This was also a reason (maybe a little unfounded) to quickly abandon the Cell architecture as a grid component.
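The kind of back-of-the-envelope check I have in mind would look something like the sketch below; the core count, clock rate and benchmark figure are generic placeholders, not actual Cell (or PlayStation) numbers.

 # Back-of-the-envelope comparison of theoretical peak vs. sustained
 # performance. All figures are generic placeholders, not real Cell numbers.
 def theoretical_peak_gflops(cores, clock_ghz, flops_per_cycle):
     """Peak = cores x clock (GHz) x FLOPs issued per cycle per core."""
     return cores * clock_ghz * flops_per_cycle
 
 def efficiency(measured_gflops, peak_gflops):
     """Fraction of the theoretical peak actually sustained by a benchmark."""
     return measured_gflops / peak_gflops
 
 peak = theoretical_peak_gflops(cores=4, clock_ghz=2.0, flops_per_cycle=4)
 measured = 12.0  # hypothetical sustained result from some benchmark
 print(f"peak: {peak:.1f} GFLOPS, sustained: {efficiency(measured, peak):.0%} of peak")

Press releases quote the peak number; what a real application achieves is the sustained fraction, and that gap is exactly what we never got around to investigating.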