Part 4 was the first milestone; for details see that document. In part 5 only small modifications and extensions were made.
Modifying Control Window Calls
There is only one simple call to start the control window without any other functions: ctrlWinStart(). All other calls operate on features of the control window.
Extending Control Options With Percentage of Objects
A new control feature has been introduced: asking for the percentage of objects to be placed in the environment: ctrlWinLocateAct(winC). It accepts a number between 1 and 100. The percentage of food objects is still kept fixed with 'nfood = 1'.
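As a rough illustration of this percentage input, the following sketch validates a user entry and converts it into a cell count for the 7 x 7 grid. The helper names and the rounding rule are assumptions; the real ctrlWinLocateAct(winC) reads the value from the control window widgets instead.

```python
def parse_percentage(raw):
    """Accept only an integer between 1 and 100 (illustrative helper,
    not the demo's actual ctrlWinLocateAct implementation)."""
    try:
        p = int(raw)
    except ValueError:
        return None
    return p if 1 <= p <= 100 else None

def percent_to_cells(percent, rows=7, cols=7):
    """Convert a percentage into a number of grid cells, at least 1
    (the rounding rule is an assumption)."""
    return max(1, round(rows * cols * percent / 100))
```

For example, 20% of a 7 x 7 grid rounds to 10 cells, while the fixed 'nfood = 1' corresponds to 1% being bumped up to a single food cell.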
The Whole Story Now
A 7 x 7 grid is offered with empty cells.
The user is asked for the percentage of objects (nobj) in the environment, which will be randomly distributed (black cells).
1% of the cells are randomly occupied by food (i.e. here one cell) (green cell).
The start position of an actor (red circle) can be selected by pointing into the grid.
The number of cycles can be entered, i.e. how often the environment shall repeat the event loop.
The type of behavior function of the actor can be selected: manual (:= 0), fixed (:= 1), or random (:= 2). With option 1 or 2 the demo runs on its own. With option 0 you have to select the next direction manually by clicking into the grid.
While the demo is running it reports the current energy level of the actor as well as the current 'Time of the environment' (which corresponds closely to the number of cycles).
If either the maximum number of cycles is reached or the energy of the actor drops below 0, the application will stop and, after a click into the grid, vanish from the screen.
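The cycle described above can be condensed into a minimal, non-graphical sketch. The energy cost of 1 unit per cycle and the function names are assumptions, not the demo's actual values:

```python
def run_cycles(max_cycles, start_energy):
    """Run the event loop until the cycle limit is reached or the
    actor's energy drops below 0 (the two stop conditions from the text)."""
    energy = start_energy
    env_time = 0                      # the 'Time of the environment'
    while env_time < max_cycles and energy >= 0:
        energy -= 1                   # assumed cost: 1 energy unit per cycle
        env_time += 1
    return env_time, energy
```

With a small energy budget the loop stops early on the energy condition; with a large budget it stops exactly at the cycle limit.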
Asking for Percentage of Objects
Asking for Number of Cycles
Asking for Pointing Into the Grid to Locate the Actor
Showing Env With Obstacles and Food
Select Behavior Type
The Importance of Freedom and Creativity
Although this environment is very simple, it can demonstrate a lot of basic 'verities'. It shows directly the inappropriateness of a fixed behavior even in a static environment. This implies that a non-fixed behavior, realized as a random behavior, is in principle strong enough to find a solution (if there is any). Whether a solution is possible or not depends on the available time, which in turn depends on the available energy.
If one interprets 'random behavior' as a behavior based on freedom and creativity, then one has here a strong motivation that a society based on freedom and creativity has (other 'bad' factors neutralized) the best chances to master an unknown future.
The basic idea from the beginning was to check whether it is possible to program in Python a simple actor-environment demo with a graphical user interface (GUI). During parts 1-3 it could be shown step by step that all the wanted features could be realized.
Clearly the actual code is far from elegant, nor is it completely explained. But everybody can inspect the code delivered as a ZIP folder.
This first GUI-based demo contains the following features:
1. A 7 x 7 grid is offered with empty cells.
2. 20% of the cells are randomly occupied by obstacles (black cells).
3. 1% of the cells are randomly occupied by food (i.e. here one cell) (green cell).
4. The start position of an actor (red circle) can be selected by pointing
into the grid.
5. The number of cycles can be entered, i.e. how often the environment shall repeat the event loop.
6. The type of behavior function of the actor can be selected: manual (:= 0), fixed (:= 1), or random (:= 2). With option 1 or 2 the demo runs on its own. With option 0 you have to select the next direction manually by clicking into the grid.
7. While the demo is running it reports the current energy level of the actor as well as the current 'Time of the environment' (which corresponds closely to the number of cycles).
8. If either the maximum number of cycles is reached or the energy of the actor drops below 0, the application will stop and after 10 s vanish from the screen.
2 How to Continue
There are many options for how to continue. Currently the following ones are being considered:
1. Enhance the current version, e.g. by offering the selection of more parameters.
2. Allow multiple actors simultaneously.
3. Allow the automatic repetition of a whole experiment over n-times.
4. Allow storing of results and statistical evaluations.
5. Start explorations of different behavior functions like genetic algorithms, classifiers, and reinforcement learning similar to AlphaGo Zero.
6. Enhance perceptual structures and motor patterns.
7. Check different cognitive architectures.
8. Enable basic dialogues with the environment.
1. Transfer the Windows implementation to Ubuntu 14.04 too.
2. Compare the different versions.
3. Integrate the actor-environment pattern within the ROS architecture.
4. Allow real-world sensors and actors, especially for robotics, for sound art works, for picture art works, for sculpture art works as well as for text art works.
5. Rewrite the actor-environment demo as a distributed process network.
6. Realize a first demo of a Local Learning Environment.
According to the current requirements we have to prepare 4 different types of behavior functions:
1 Problem to be Solved
In part 2 we have mentioned the following 4 types of behavior functions
which we need:
1. The behavior function phi of the actor is 'empty': phi = 0. The actor functions like an 'envelope': you can see the body of the actor on the screen, but its behavior depends completely on the inputs given by a human person.
2. The behavior function of the actor is driven by one fixed rule: phi(i) = const. The actor will always do the same, independent of the environment.
3. The behavior function of the actor is driven by a source of random values; therefore the output is completely random: phi(i) = random.
4. The behavior function of the actor is driven by a source of random values, but simultaneously the actor has some simple memory remembering the last n steps before finding food. Therefore the behavior is partially random, partially directed, depending on the distance to the goal (food): phi : I x IS —> IS x O, with the internal states IS serving as a simple memory which can collect data from the last n steps before reaching the goal. If the memory is not empty then it can happen that the actual input matches the actual memory content, and then the memory gives the actor a 'direction' for the next steps.
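Cases 1-3 can be sketched as plain Python functions. The direction encoding 0..3 and the fixed choice in case 2 are assumptions; case 4 would additionally carry the memory IS as state:

```python
import random

def phi_manual(user_input):
    """Case 1: 'empty' behavior, phi = 0; the human supplies the move."""
    return user_input

def phi_fixed(_percept):
    """Case 2: one fixed rule, phi(i) = const; always the same move."""
    return 1                       # e.g. always 'right' (assumed encoding)

def phi_random(_percept, rng=random):
    """Case 3: completely random output, phi(i) = random."""
    return rng.randrange(4)        # one of the four directions 0..3
```

All three share the same call shape (percept in, direction out), so the demo can switch between them with a single selection parameter.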
In this part 3 we will program cases 1-3 and implement a food-intake function which will increase the energy level again.
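The food-intake function mentioned here could be sketched like this; the energy gain of 10 units and the cell codes are assumptions, not the values of the actual implementation:

```python
def eat(energy, cell_value, food_code=2, food_gain=10):
    """If the actor enters a food cell, increase its energy and empty
    the cell; otherwise leave everything unchanged (hypothetical rule)."""
    if cell_value == food_code:
        return energy + food_gain, 0   # food consumed, cell becomes empty
    return energy, cell_value
```

Returning the new cell value alongside the energy makes the consumption of the food explicit in one step.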
1 Problem to be Solved
2 How to Program
2.1 Continuation with Timer instead of Console Interaction; Quit
2.2 Inserting an Actor by Mouse-Click
2.3 Putting Things Together
Taking the proposal from Part 1 for an environment-actor demo, we enhance it by replacing all console interactions with mouse-clicks.
In this part 2 (see the attached PDF for details) the last version 'gdh-win10.py' will be improved by replacing all console interactions with mouse-clicks or with time-delay functions. There are also some minor improvements of the files environment.py and acctor.py.
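Both replacements (a timer instead of console input, and a mouse-click to place the actor) rest on two standard tkinter mechanisms, after() and bind(). The following sketch shows the pattern only; the widget names, the 40-pixel cell size, and the helpers are assumptions, not the code of 'gdh-win10.py':

```python
import tkinter as tk

CELL_PX = 40                         # assumed cell size in pixels

def pixel_to_cell(x, y, cell_px=CELL_PX):
    """Map a click position to a (row, col) grid cell."""
    return y // cell_px, x // cell_px

def make_demo_window(delay_ms=500):
    """Build a 7 x 7 canvas; a click places the actor, a timer drives
    the cycles (illustrative pattern, not the demo's actual code)."""
    root = tk.Tk()
    canvas = tk.Canvas(root, width=7 * CELL_PX, height=7 * CELL_PX)
    canvas.pack()

    def on_click(event):
        row, col = pixel_to_cell(event.x, event.y)
        canvas.create_oval(col * CELL_PX + 5, row * CELL_PX + 5,
                           col * CELL_PX + 35, row * CELL_PX + 35,
                           fill="red")              # actor as red circle

    def tick():
        # one environment cycle; re-armed by the timer instead of input()
        root.after(delay_ms, tick)

    canvas.bind("<Button-1>", on_click)
    root.after(delay_ms, tick)
    return root

if __name__ == "__main__":
    make_demo_window().mainloop()
```

after() re-arms itself on each call, which replaces the blocking console prompt with a steady event-loop rhythm.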
What you see here is the content of the attached PDF. The text is something of a protocol of an experiment to learn the programming language Python from scratch. No pre-knowledge, no tools, no teachers! If you are in the same situation and have wanted to learn Python for a long time, you can enter here and follow the steps.
Starting With a Real Problem
For many years I refused to learn Python because, from the point of view of mathematics, the language is really disgusting. But one has to accept that Python has made its way into many areas, including technology, science, and the arts. And because I urgently needed an acceptable software environment for all my theories and experiments, there comes a day of decision: you have to start, or you can't show many things.
There is another point: meanwhile I have identified as the main framework for our theories (which we are discussing on uffmm.org) the combination of Ubuntu + ROS (Robot Operating System) + TensorFlow. Here Python is the main language besides C/C++. Furthermore, I discovered many applications for our art projects which are also strongly supported by Python. Thus the motivation became stronger than my disgust about this quirky style of thinking.
In the attached PDF you can see how I battled through the Python jungle in 3 days, producing a first outcome.
After these 3 days I would say: yes, Python is a worthwhile tool to work with. I am convinced that we can solve most of our problems with it.
Therefore it is highly probable that you will find here more documents with Python programs in the future. The idea is indeed to set up a local learning environment (LLE) which is small, flexible, portable, and very powerful. It should enable really intelligent machines that help people, not substitute for people. The future will belong to new human-machine couplings.