PROGRAMMING WITH PYTHON. Milestone 0. Actor-Environment Baseline

emerging-mind lab (EML)
eJournal ISSN 2567-6466
29 November 2017
info@emerging-mind.org
Author: Gerd Doeben-Henisch
EMail: gerd@doeben-henisch.de
FRA-UAS – Frankfurt University of Applied Sciences
INM – Institute for New Media (Frankfurt, Germany)

ZIP-FOLDER

SUMMARY

This small software package is a further step in the exercise of learning Python3 while trying to solve a given theoretical problem. The logic behind this software can be described as follows:

  1. This software is an illustration of a simple case study from the uffmm.org site. The text of the case study is not yet finished, and this software will be extended further in the next weeks/months…
  2. The base version of this software offers the user a menu-driven start to define a simple test environment in which the behaviour of (yet) simple actors can be investigated. At the end of a test run (every run can have n-many cycles, and there can be m-many repetitions of a run) a simple graphic shows the summarized results.
  3. The actual actors have no kind of perception, no memory, and no computational intelligence; they are driven completely either by a fixed rule or by chance. But they consume energy, which decreases over time, and they will 'die' if they cannot find new energy.
  4. A more extended description of the software will follow, both apart from the case study and within it.
  5. The immediate next extensions will be examples of simple sensory models (smelling, tasting, touching, hearing, and viewing). Based on these, some exercises will follow with simple memory structures, simple model-building capabilities, simple language constructs, making music, painting pictures, and doing some arithmetic. For this the scenario has to be extended so that there are at least three actors.
  6. By the way, the main motivation for doing this is philosophy of science: exercising the construction of an emerging mind where all used parts and methods are known. Real intelligence can never be described by its parts only; it is an implicit function which makes the 'whole' different from the so-called 'parts'. As a side effect there can be lots of interesting applications helping humans to become better humans 🙂 But, because we are free-acting systems, we can turn everything into its opposite, turning something good into 'evil'…
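The energy logic sketched in item 3 can be illustrated with a few lines of Python. This is a hypothetical sketch, not the package's code; the names (step, grid) and the cost and gain values are assumptions.

```python
import random

def step(pos, energy, grid, mode="random"):
    """One cycle: move the actor, pay an energy cost, eat if food is found."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    # a fixed rule always picks the same move; otherwise it is pure chance
    dx, dy = moves[0] if mode == "fixed" else random.choice(moves)
    x = max(0, min(len(grid) - 1, pos[0] + dx))
    y = max(0, min(len(grid) - 1, pos[1] + dy))
    energy -= 1                      # moving always costs energy
    if grid[x][y] == "food":
        energy += 10                 # food raises the energy level again
        grid[x][y] = "empty"
    return (x, y), energy

# an actor 'dies' as soon as its energy drops below 0
pos, energy = (3, 3), 5
grid = [["empty"] * 7 for _ in range(7)]
while energy >= 0:
    pos, energy = step(pos, energy, grid)
```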

Programming with Python. Part 6. First Demo Extended

emerging-mind.org
eJournal ISSN 2567-6466
(info@emerging-mind.org)
Author: Gerd Doeben-Henisch
gerd@doeben-henisch.de

ZIP-SW

VIDEO

Abstract

Part 4 was the first milestone; for details see that document. Part 5 added only small modifications and extensions.

Modifying Control Window Calls

There is only one simple call that starts the control window without any other functions: ctrlWinStart(). All other calls operate on features of the control window.
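This separation can be sketched with tkinter. Only the two call names ctrlWinStart() and ctrlWinLocateAct() come from the text; the window bodies below are assumptions, not the actual code.

```python
import tkinter as tk

def ctrlWinStart():
    """The one simple call: create and return the bare control window."""
    winC = tk.Tk()
    winC.title("AE Control Window")
    return winC

def ctrlWinLocateAct(winC):
    """All other calls operate on features of an existing control window."""
    tk.Label(winC, text="Point into the grid to locate the actor").pack()
```

A session would then call winC = ctrlWinStart() once and pass winC to every subsequent feature call.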

Extending Control Options With Percentage of Objects

A new control feature has been introduced: asking for the percentage of objects to be placed in the environment: ctrlWinLocateAct(winC). It accepts a number between 1 and 100. The percentage of food objects is still kept fixed with 'nfood = 1'.
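The 1-100 validation can be sketched as follows; ask_percentage is a hypothetical helper, only nobj and nfood appear in the actual software.

```python
def ask_percentage(raw):
    """Turn the user's entry into a percentage of object cells (1-100)."""
    nobj = int(raw)
    if not 1 <= nobj <= 100:
        raise ValueError("percentage must be between 1 and 100")
    return nobj

nfood = 1  # the percentage of food objects stays fixed
```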

The Whole Story Now

  1. A 7 x 7 grid is offered with empty cells.
  2. The user is asked for the percentage of objects in the environment (nobj) which shall be randomly distributed (black cells).
  3. 1% of the cells is randomly occupied by food (i.e. here one cell; green cell).
  4. The start position of an actor (red circle) can be selected by pointing into the grid.
  5. The number of cycles can be entered, determining how often the environment repeats the event loop.
  6. The type of behavior function of the actor can be selected: manual (:= 0), fixed (:= 1), or random (:= 2). With option 1 or 2 the demo runs on its own. With option 0 you have to select the next direction manually by clicking into the grid.
  7. While the demo is running it reports the actual energy level of the actor as well as the actual 'time of the environment' (which corresponds closely to the number of cycles).
  8. If either the maximum number of cycles is reached or the energy of the actor drops below 0, the application stops and, after a click into the grid, vanishes from the screen.
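Steps 1-3 above amount to distributing obstacle and food cells over the grid. A minimal sketch (build_grid and the cell labels are assumptions, not the package's code):

```python
import random

SIZE = 7  # the 7 x 7 grid of step 1

def build_grid(nobj, nfood=1):
    """Randomly occupy nobj% of the cells with obstacles and nfood% with food."""
    cells = [(x, y) for x in range(SIZE) for y in range(SIZE)]
    random.shuffle(cells)
    n_obst = round(len(cells) * nobj / 100)
    n_food = max(1, round(len(cells) * nfood / 100))  # at least one food cell
    grid = {c: "empty" for c in cells}
    for c in cells[:n_obst]:
        grid[c] = "obstacle"
    for c in cells[n_obst:n_obst + n_food]:
        grid[c] = "food"
    return grid
```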

Asking for Percentage of Objects

AE5 – Control Window asking for Percentage of Obstacles in Environment

Asking for Number of Cycles

AE5 – Control Window Asking for Number of Cycles to Run

Asking for Pointing into the Grid to Locate the Actor

AE5 – Control Window asks to point into the Grid to locate the actor

Showing Env With Obstacles and Food

AE5 – Control Window Showing Actual Distribution of Obstacles (black), Food (green), as well as an Actor (red)

Select Behavior Type

AE5 – Control Window asking for Wanted Behavior Type of Actor (0-2)

Final Stage

AE5 – Window shows Grid with final state
AE5 – Control Window Comments Final Stage with no more Energy for Actor

The Importance of Freedom and Creativity

Although this environment is very simple, it can demonstrate a lot of basic 'verities'. It directly shows the inappropriateness of a fixed behavior even in a static environment. It also implies that a non-fixed behavior, realized as random behavior, is in principle strong enough to find a solution (if there is any). Whether a solution is possible or not depends on the available time, which in turn depends on the available energy.

If one interprets 'random behavior' as behavior based on freedom and creativity, then one has here a strong motivation that a society based on freedom and creativity has (other 'bad' factors neutralized) the best chances to master an unknown future.
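The claim that a fixed rule fails where random behavior can succeed is easy to check in a tiny simulation. This is a hypothetical sketch; the names and the cycle budget are assumptions.

```python
import random

def run(behavior, start, food, max_cycles=200):
    """Return True if the actor reaches the food cell within max_cycles."""
    pos = start
    for t in range(max_cycles):
        if pos == food:
            return True
        dx, dy = behavior(t)
        # stay inside the 7 x 7 grid
        pos = (max(0, min(6, pos[0] + dx)), max(0, min(6, pos[1] + dy)))
    return pos == food

fixed_phi = lambda t: (1, 0)  # always the same move
random_phi = lambda t: random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])

# a rule that always moves in one direction can never reach food lying
# behind the actor, while a random walk will (given enough cycles, i.e.
# enough time and energy) almost surely stumble upon it
```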

How to Continue

You can continue with Part 1, 'How to Program with Python under Ubuntu 14.04?'

Programming with Python. Part 4. First Demo complete

emerging-mind.org
eJournal ISSN 2567-6466
(info@emerging-mind.org)
Gerd Doeben-Henisch
gerd@doeben-henisch.de
October 20, 2017

PDF

ZIP-SW

1 First Milestone Reached

The basic idea from the beginning was to check whether it is possible to program in Python a simple actor-environment demo with a graphical user interface (GUI).

During parts 1-3 it could be proved stepwise that all the wanted features could be realized.

Clearly, the actual code is far from elegant, nor is it completely explained. But everybody can inspect the code delivered as a ZIP folder.

This first GUI-based demo contains the following features:

1. A 7 x 7 grid is offered with empty cells.
2. 20% of the cells are randomly occupied by obstacles (black cells).
3. 1% of the cells is randomly occupied by food (i.e. here one cell; green cell).
4. The start position of an actor (red circle) can be selected by pointing into the grid.
5. The number of cycles can be entered, determining how often the environment repeats the event loop.
6. The type of behavior function of the actor can be selected: manual (:= 0), fixed (:= 1), or random (:= 2). With option 1 or 2 the demo runs on its own. With option 0 you have to select the next direction manually by clicking into the grid.
7. While the demo is running it reports the actual energy level of the actor as well as the actual 'Time of the environment' (which corresponds closely to the number of cycles).
8. If either the maximum number of cycles is reached or the energy of the actor drops below 0, the application stops and, after 10 s, vanishes from the screen.
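The stop condition of item 8 and the repetition of item 5 can be sketched together; event_loop and the step callback are hypothetical names, not the demo's code.

```python
def event_loop(max_cycles, energy, step):
    """Repeat the environment's event loop until the cycle budget is used up
    or the actor's energy has dropped below 0."""
    cycle = 0
    while cycle < max_cycles and energy >= 0:
        energy = step(cycle, energy)  # one environment cycle
        cycle += 1
    return cycle, energy
```

With a step that only consumes energy, event_loop(10, 3, lambda c, e: e - 1) ends after four cycles with negative energy, i.e. the actor starves before the cycle budget is used up.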

2 How to Continue

There are many options for how to continue. Currently the following ones are considered:

ACTOR-ENVIRONMENT FEATURES

1. Enhance the actual version, e.g. by offering the selection of more parameters.
2. Allow multiple actors simultaneously.
3. Allow the automatic repetition of a whole experiment n times.
4. Allow the storing of results and statistical evaluations.
5. Start explorations of different behavior functions like genetic algorithms, classifiers, reinforcement learning similar to AlphaGo Zero, etc.
6. Enhance perceptual structures and motor patterns.
7. Check different cognitive architectures.
8. Enable basic dialogues with the environment.
9. …

COMPUTING FEATURES

1. Transfer the Windows implementation to Ubuntu 14.04 too.
2. Compare the different versions.
3. Integrate the actor-environment pattern within the ROS architecture.
4. Allow real-world sensors and actors, especially for robotics, for sound art works, for picture art works, for sculpture art works, as well as for text art works.
5. Rewrite the actor-environment demo as a distributed process network.
6. Realize a first demo of a Local Learning Environment.
7. …

Continue to part 5

Programming with Python. Part 3. Different Behavior Functions for Experiments 1-4

emerging-mind.org eJournal ISSN 2567-6466
(info@emerging-mind.org)
Gerd Doeben-Henisch
gerd@doeben-henisch.de
October 18, 2017

Contents
1 Problem to be Solved
2 How to Program
2.1 Empty Behavior Function
2.2 Fixed Behavior Function
2.3 Random Behavior Function
2.4 Food-Intake Function

Abstract

According to the actual requirements we have to prepare 4 different types of behavior functions.

1 Problem to be Solved

In part 2 we have mentioned the following 4 types of behavior functions
which we need:
1. The behavior function phi of the actor is 'empty': phi = 0. The actor functions like an 'envelope': you can see the body of the actor on the screen, but its behavior depends completely on the inputs given by a human person.
2. The behavior function of the actor is driven by one fixed rule: phi(i) = const. The actor will always do the same, independent of the environment.
3. The behavior function of the actor is driven by a source of random values; therefore the output is completely random: phi(i) = random.
4. The behavior function of the actor is driven by a source of random values, but simultaneously the actor has some simple memory remembering the last n steps before finding food. Therefore the behavior is partially random and partially directed, depending on the distance to the goal (food): phi : I x IS —> IS x O, with internal states IS serving as a simple memory which can collect data from the last n-many steps before reaching the goal. If the memory is not empty, then it can happen that the actual input matches the actual memory content, and then the memory gives the actor a 'direction' for the next steps.

In this part 3 we will program cases 1-3, and we will implement a food-intake function which will increase the energy level again.
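Cases 1-3 and the food-intake function can be sketched in Python as follows. The function names and the energy gain are assumptions; only phi and the case numbering follow the list above.

```python
import random

MOVES = ((0, 1), (0, -1), (1, 0), (-1, 0))

def phi_empty(inp):
    """Case 1: the actor is only an 'envelope'; the move comes from the user."""
    return inp["user_choice"]

def phi_fixed(inp, const=(1, 0)):
    """Case 2: one fixed rule; the output never depends on the environment."""
    return const

def phi_random(inp):
    """Case 3: the output is driven completely by a source of random values."""
    return random.choice(MOVES)

def food_intake(energy, cell, gain=10):
    """Increase the energy level again whenever the actor stands on food."""
    return energy + gain if cell == "food" else energy
```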

For more see the attached PDF-file.

For all the python sources see the attached ZIP-file.

Continue to part 4 (First Milestone)