emerging-mind lab (EML) eJournal
ISSN 2567-6466
29 November 2017
email@example.com
Author: Gerd Doeben-Henisch
EMail: firstname.lastname@example.org
FRA-UAS – Frankfurt University of Applied Sciences
INM – Institute for New Media (Frankfurt, Germany)
This small software package is a further step in an exercise in learning Python 3 while trying to solve a given theoretical problem. The logic behind this software can be described as follows:
This software illustrates a simple case study from the uffmm.org site. The text of the case study is not yet finished, and this software will be extended further in the coming weeks and months.
The base version of this software offers the user a menu-driven start to define a simple test environment in which one can investigate the behavior of (so far) simple actors. At the end of a test run (every run can have n-many cycles, and there can be m-many repetitions of a run) a simple graphic shows the summarized results.
The current actors have no perception, no memory, and no computational intelligence; they are driven completely either by a fixed rule or by chance. But they consume energy, which decreases over time, and they will 'die' if they cannot find new energy.
A more extensive description of the software will follow, both within the case study and separately from it.
The immediate next extensions will be examples of simple sensory models (smelling, tasting, touching, hearing, and viewing). Based on this, some exercises will follow with simple memory structures, simple model-building capabilities, simple language constructs, making music, painting pictures, and doing some arithmetic. For this the scenario has to be extended so that there are at least three actors.
By the way, the main motivation for doing this is philosophy of science: exercising the construction of an emerging mind where all used parts and methods are known. Real intelligence can never be described by its parts only; it is an implicit function which makes the 'whole' different from the so-called 'parts'. As a side effect there can be lots of interesting applications helping humans to become better humans 🙂 But, because we are free-acting systems, we can turn everything into its opposite, turning something good into 'evil'…
Part 4 was the first milestone; for details see that document. In part 5 only small modifications and extensions were made.
Modifying Control Window Calls
There is only one simple call that starts the control window without any other functions: ctrlWinStart(). All other calls operate on features of the control window.
Extending Control Options With Percentage of Objects
A new control feature has been introduced: asking for the percentage of objects to be placed in the environment: ctrlWinLocateAct(winC). This accepts a number between 1 and 100. The percentage of food objects is still kept fixed with 'nfood = 1'.
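As a rough illustration of what such a control option has to do with the user's input, the following sketch translates a percentage into a number of object cells on the 7 x 7 grid (assumption: the helper name objects_from_percentage and the rounding rule are invented for this illustration; the original code in the ZIP folder works inside a GUI control window instead):

```python
GRID_SIZE = 7  # the demo uses a 7 x 7 grid

def objects_from_percentage(pobj: int) -> int:
    """Translate a percentage (1-100) into a count of object cells
    on the grid (illustrative helper, not from the original code)."""
    if not 1 <= pobj <= 100:
        raise ValueError("percentage must be between 1 and 100")
    ncells = GRID_SIZE * GRID_SIZE          # 49 cells in total
    return round(ncells * pobj / 100)

# The number of food cells stays fixed, as in the demo:
nfood = 1

print(objects_from_percentage(20))  # 20% of 49 cells -> 10
```

With 20% the user would get 10 obstacle cells on the 49-cell grid, while the single food cell stays constant.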
The Whole Story Now
1. A 7 x 7 grid is offered with empty cells.
2. The user is asked for the percentage of objects in the environment (nobj), which are randomly distributed (black cells).
3. 1% of the cells are randomly occupied by food, i.e. here one cell (green cell).
4. The start position of an actor (red circle) can be selected by pointing into the grid.
5. The number of cycles can be entered, i.e. how often the environment shall repeat the event loop.
6. The type of behavior function of the actor can be selected: manual (:= 0), fixed (:= 1), or random (:= 2). With option 1 or 2 the demo runs on its own; with option 0 you have to select the next direction manually by clicking into the grid.
7. While the demo is running it reports the actual energy level of the actor as well as the actual 'time of the environment' (which corresponds closely to the number of cycles).
8. If either the maximum number of cycles is reached or the energy of the actor falls below 0, the application stops and, after a click into the grid, vanishes from the screen.
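The core of the steps above can be condensed into a text-only sketch of the event loop (assumption: the function and variable names here are illustrative inventions; the original code drives a GUI grid and reads its parameters from the control window):

```python
import random

GRID = 7  # the demo's 7 x 7 grid

def run_demo(start, obstacles, food, maxcycles, energy=10, seed=0):
    """One run: random moves until food is found, energy is exhausted,
    or the maximum number of cycles is reached. Returns (time, energy)."""
    rnd = random.Random(seed)
    pos, t = start, 0
    while t < maxcycles and energy >= 0:
        t += 1                      # the 'time of the environment'
        energy -= 1                 # each cycle consumes energy
        dx, dy = rnd.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        nxt = (pos[0] + dx, pos[1] + dy)
        # a move is only executed if it stays on the grid and is not blocked
        if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID and nxt not in obstacles:
            pos = nxt
        if pos == food:
            break                   # food found, the run ends
    return t, energy

t, e = run_demo(start=(0, 0), obstacles={(1, 1)}, food=(3, 3), maxcycles=50)
print(t, e)
```

The run stops on whichever condition triggers first, exactly as described in steps 7 and 8 above.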
Asking for Percentage of Objects
Asking for Number of Cycles
Asking for Pointing Into the Grid to Locate the Actor
Showing the Environment With Obstacles and Food
Select Behavior Type
The Importance of Freedom and Creativity
Although this environment is very simple, it can demonstrate a lot of basic 'verities'. It directly shows the inappropriateness of a fixed behavior even in a static environment. This implies that a non-fixed behavior, realized here as random behavior, is in principle strong enough to find a solution (if there is any). Whether a solution is possible or not depends on the available time, which in turn depends on the available energy.
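This claim — a fixed behavior can be defeated by a static obstacle which a random behavior eventually walks around — can be illustrated with a toy simulation (assumption: the grid layout, wall position, and all names here are invented for the illustration and are not part of the original demo code):

```python
import random

GRID = 5
OBSTACLES = {(2, 0), (2, 1), (2, 2)}   # a wall blocking the direct path east
FOOD = (4, 0)
START = (0, 0)

def step(pos, direction):
    """Apply a move; the actor stays in place when blocked by wall or border."""
    nxt = (pos[0] + direction[0], pos[1] + direction[1])
    if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID and nxt not in OBSTACLES:
        return nxt
    return pos

def finds_food(behavior, steps=2000, seed=None):
    """Run a behavior function for a number of steps; True if food is reached."""
    rnd = random.Random(seed)
    pos = START
    for _ in range(steps):
        pos = step(pos, behavior(rnd))
        if pos == FOOD:
            return True
    return False

# fixed rule: always move east -> the actor gets stuck in front of the wall
fixed = finds_food(lambda rnd: (1, 0))
# random rule: chance lets the actor wander around the wall
randm = finds_food(lambda rnd: rnd.choice([(1, 0), (-1, 0), (0, 1), (0, -1)]),
                   seed=0)
print(fixed, randm)
```

The fixed actor never reaches the food, no matter how many steps it is given; the random actor can succeed, provided its energy (here: the step budget) lasts long enough.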
If one interprets 'random behavior' as behavior based on freedom and creativity, then one has here a strong motivation that a society based on freedom and creativity has (other 'bad' factors neutralized) the best chances to master an unknown future.
The basic idea from the beginning was to check whether it is possible to program in Python a simple actor-environment demo with a graphical user interface (GUI). During parts 1-3 it could be proved stepwise that all the wanted features could be realized.
Clearly the actual code is far from elegant, nor is it completely explained. But everybody can inspect the code, delivered as a ZIP folder.
This first GUI-based demo contains the following features:
1. A 7 x 7 grid is offered with empty cells.
2. 20% of the cells are randomly occupied by obstacles (black cells).
3. 1% of the cells are randomly occupied by food, i.e. here one cell (green cell).
4. The start position of an actor (red circle) can be selected by pointing into the grid.
5. The number of cycles can be entered, i.e. how often the environment shall repeat the event loop.
6. The type of behavior function of the actor can be selected: manual (:= 0), fixed (:= 1), or random (:= 2). With option 1 or 2 the demo runs on its own; with option 0 you have to select the next direction manually by clicking into the grid.
7. While the demo is running it reports the actual energy level of the actor as well as the actual 'time of the environment' (which corresponds closely to the number of cycles).
8. If either the maximum number of cycles is reached or the energy of the actor falls below 0, the application stops and after 10 s vanishes from the screen.
2 How to Continue
There are many options for how to continue. Currently the following ones are considered:
1. Enhance the actual version, e.g. by offering the selection of more parameters.
2. Allow multiple actors simultaneously.
3. Allow the automatic repetition of a whole experiment n times.
4. Allow storing of results and statistical evaluations.
5. Start explorations of different behavior functions like genetic algorithms, classifiers, and reinforcement learning similar to AlphaGo Zero.
6. Enhance perceptual structures and motor patterns.
7. Check different cognitive architectures
8. Enable basic dialogues with the environment
1. Transfer the Windows implementation to Ubuntu 14.04 too.
2. Compare the different versions.
3. Integrate the actor-environment pattern within the ROS architecture.
4. Allow real-world sensors and actors, especially for robotics, for sound artworks, for picture artworks, for sculpture artworks, as well as for text artworks.
5. Rewrite the actor-environment demo as a distributed process network.
6. Realize a first demo of a Local Learning Environment
According to the current requirements we have to prepare four different types of behavior functions.
1 Problem to be Solved
In part 2 we mentioned the following four types of behavior functions which we need:
1. The behavior function phi of the actor is 'empty': phi = 0. The actor functions like an 'envelope': you can see the body of the actor on the screen, but its behavior depends completely on the inputs given by a human person.
2. The behavior function of the actor is driven by one fixed rule: phi(i) = const. The actor will always do the same, independent of the environment.
3. The behavior function of the actor is driven by a source of random values; therefore the output is completely random: phi(i) = random.
4. The behavior function of the actor is driven by a source of random values, but simultaneously the actor has a simple memory remembering the last n steps before finding food. Therefore the behavior is partially random, partially directed, depending on the distance to the goal (food): phi : I x IS --> IS x O, with internal states IS as a simple memory which can collect data from the last n-many steps before reaching the goal. If the memory is not empty, then it can happen that the actual input matches the actual memory content, and then the memory gives the actor a 'direction' for the next steps.
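The four types can be sketched as four Python functions (assumption: all names, the move encoding, and the dict-based memory for type 4 are illustrative choices, not taken from the original code):

```python
import random

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # the four grid directions

def phi_empty(user_input, _memory=None):
    """Type 1: the actor is only an 'envelope'; the move comes from a human."""
    return user_input

def phi_fixed(_inp=None, _memory=None):
    """Type 2: one fixed rule, the same move every time."""
    return (1, 0)

def phi_random(_inp=None, _memory=None, rnd=random):
    """Type 3: the move is completely random."""
    return rnd.choice(MOVES)

def phi_memory(inp, memory, rnd=random):
    """Type 4: phi : I x IS --> IS x O. The memory maps remembered
    inputs to the direction that previously led toward food; if the
    current input matches, the stored direction is reused, otherwise
    the actor falls back to chance."""
    if inp in memory:
        return memory, memory[inp]           # directed by memory
    return memory, rnd.choice(MOVES)         # partially random
```

Type 4 deliberately returns the (possibly updated) internal state together with the output, mirroring the signature phi : I x IS --> IS x O given above.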
In this part 3 we will program cases 1-3, and we will implement a food-intake function which will increase the energy level again.
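A food-intake function of the kind announced here could look like the following minimal sketch (assumption: the function name, the energy gain of 20, and the set-based food bookkeeping are invented for this illustration):

```python
ENERGY_FOOD = 20   # assumed energy gain per consumed food object

def food_intake(pos, food_cells, energy):
    """If the actor stands on a food cell, consume it and raise the
    energy level again (illustrative sketch, not the original code)."""
    if pos in food_cells:
        food_cells.discard(pos)        # the food object is used up
        energy += ENERGY_FOOD
    return energy

cells = {(3, 3)}
print(food_intake((3, 3), cells, energy=5))   # 5 + 20 -> 25
```

Called once per cycle after the actor has moved, this counteracts the steady energy loss and lets a successful actor survive beyond its initial energy budget.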