Automatic adaptation of Human-Computer Interaction system
Jaromir Przybyło
AGH University of Science and Technology, Department of Automatics
Abstract: Human-computer interaction (HCI) is an emerging field of science aimed at providing natural ways for humans to use computers. Among the many proposed systems are vision-based approaches utilizing face tracking and facial action recognition. Many sources of variation in facial appearance exist, which makes recognition a challenging task. Often, uncomfortable manual calibration of the system is required. Therefore, in our opinion the key issue is the automatic adaptation of the machine to the human rather than vice versa.
Keywords: human-computer interaction; image and video
processing; adaptation and calibration
1. Introduction
Typical input devices, such as the QWERTY keyboard and the mouse, remain the dominant means of human-computer interaction. Unfortunately, they are not designed for people with severe physical disabilities. People who are quadriplegic – as a result of cerebral palsy, traumatic brain injury, or stroke – often have great difficulty using a mouse or keyboard, or cannot use them at all. In such situations, facial actions and small head movements can serve as an alternative way of communicating with machines (a computer, an electric wheelchair, etc.).
There are various approaches to building mouse alternatives. Some of them use infrared emitters attached to the user's glasses, head band, or cap [7]. Others try to measure changes in eyeball position and determine gaze direction [1, 14]. There are also attempts to leverage electroencephalograms for human-machine interaction [5]. Typically, these systems use specialized equipment such as electrodes, goggles, and helmets, which are uncomfortable to wear and may also cause hygienic problems.
On the other hand, optical observation makes it easy to adapt the system to the special needs of people with various disabilities and does not require uncomfortable accessories. Therefore, vision-based systems are a promising solution for HCI interfaces. Such systems also have potential in other applications (e.g. entertainment and game systems, AV content analysis, etc.).
2. The need for machine adaptation
There are many approaches to vision-based human-computer interaction. One of them is the "Camera Mouse" system [2], which provides computer access for people with severe disabilities. The user's movements are translated into movements of the mouse pointer on the screen. The system tracks facial features such as the tip of the user's nose or a finger. The visual tracking algorithm is based on a correlation technique and requires manual selection of the tracked features.
Another widely known example is the CAMSHIFT algorithm [4], used as a computer interface for controlling commercial computer games and for exploring immersive 3D graphic worlds. It operates on color probability distributions derived from video frame sequences. The efficiency of the algorithm depends on the created skin-color model. Many other face localization and tracking algorithms use a skin-color model as well.
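To make the dependence on the skin-color model concrete, the sketch below shows the CAMSHIFT idea using OpenCV's implementation; the initial window, histogram size, and thresholds are illustrative assumptions rather than settings from [4].

```python
import cv2

# Minimal CAMSHIFT sketch: track a face via a skin-color probability map.
# The initial window and all histogram settings are assumptions.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80                    # assumed initial face region
track_window = (x, y, w, h)

# Skin-color model: hue histogram of the initial region.
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))  # skip dark pixels
hist = cv2.calcHist([hsv_roi], [0], mask, [16], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Color probability distribution: back-project the skin histogram.
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # Shift/resize the window toward the mode of the distribution.
    rot_rect, track_window = cv2.CamShift(prob, track_window, term)
```

Everything hinges on the histogram built in the first step – exactly the kind of parameter the method proposed below aims to initialize automatically.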
There are many sources of variation in facial appearance. They can be categorized [9] into two groups: intrinsic factors related to the physical nature of the face (identity, age, sex, facial expression), and extrinsic factors due to the interaction of light with the face and the observer (illumination, viewing geometry, imaging process, occlusion, shadowing).
The above examples show typical problems in designing a robust vision-based HCI system: careful manual intervention during initialization of the algorithm, and recalibration during execution, are required.
On the other hand, human object recognition seems effortless. From generic knowledge of an object, one can easily recognize novel instances of it (for example: a smile, sadness, etc.). The main question is how this knowledge is encoded and used.
In this paper, we propose a vision-based automatic calibration method designed to initialize an HCI system or adapt its parameters during execution. It uses a generic model of the human face to detect facial features. After that, the system can learn other visual cues that are important for recognizing facial actions.
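The bootstrapping idea – start from a generic model, then learn person-specific cues – can be illustrated with OpenCV's stock pretrained face detector; note this is only an analogy, since the paper's own initialization is the blink-based procedure of Section 3.

```python
import cv2

# Generic face model: OpenCV's pretrained frontal-face Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def bootstrap_face(frame):
    """Locate a face with a generic model; the returned ROI can then seed
    person-specific cues such as a skin histogram or feature templates."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None   # (x, y, w, h) or None
```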
3. Algorithm outline
In order to minimize manual intervention during initialization of the HCI system, blink gesture detection is performed. Using only the blink signal allows people with severe physical disabilities to perform the initialization without help, or with minimal assistance from caretakers. The user has to stay still in front of the camera and blink. The proposed algorithm makes use of both static image analysis and motion detection, performed simultaneously. The algorithm outline is given in Fig. 1 (details are in [15]).
After the preprocessing stage (color conversion, median filtering), motion detection is performed in the luminance component by creating Motion Energy Images (MEI) [3]. The MEI representation shows where in the image the motion occurs. The delay parameter defines the temporal extent of a movement and is set to match the typical eye blink period. The sum of the MEI representation gives information about how big the motion is, and it is used to detect head movements or significant scene lighting changes.
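A minimal MEI sketch in OpenCV terms – thresholded frame differences OR-ed over the last delay frames; the thresholds and window length below are assumptions, not the paper's values.

```python
import cv2
import numpy as np

DELAY = 10         # temporal extent in frames, matched to a blink (assumption)
DIFF_THRESH = 25   # per-pixel luminance change counted as motion (assumption)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
history = []       # binary motion masks of the last DELAY frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    history.append(cv2.absdiff(gray, prev) > DIFF_THRESH)
    prev = gray
    if len(history) > DELAY:
        history.pop(0)

    # MEI: union of motion over the temporal window.
    mei = np.logical_or.reduce(history)

    # The sum of the MEI acts as a global motion measure: a large value
    # signals a head movement or a lighting change rather than a blink.
    if mei.mean() > 0.25:      # fraction of moving pixels (assumed limit)
        history.clear()
```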
The aim of the object analysis stage is to remove from the binary MEI image those objects which do not fulfill assumed criteria (object size, etc.). This step removes artifacts generated by image noise or small head movements. All remaining objects are considered as possible eye candidates. To select the proper pair of objects most likely located in the eye regions of the image, additional analysis based on face anatomy is performed (the distance and the angle of the line passing through the eye candidates must lie within assumed limits).
Fig. 1. Overview of the algorithm

To detect a blink event, the remaining pair of eye candidates must exist in the same image region on N consecutive frames. The approximate eye positions obtained are used to define regions of interest (ROIs) for accurate eye and nostril localization. The blink event triggers the feature (eye, nostril) localization algorithm, which executes until a head movement or a significant scene lighting change is detected. In such a situation (for example, when the user shakes or moves the head), the feature detectors are suspended until the next blink.

Eye and nostril detection is performed on the computed ROIs using both luminance and chrominance components. Two probability maps are created, each of them based on a different image representation. The probability map from the chroma is based on the observation that high Cb and low Cr values can be found around the eyes [8]. Since nostrils and eyes (and also other significant face features) usually
form dark or light blobs on the face image, the second
probability map is constructed from the luminance component using a blob detection algorithm based on a scale-space representation [11] of the image signal.

After the eye and nostril localization stage, the following information is available for calibration and initialization of the HCI application: the blink event and its location, the eye and nostril positions, the approximate face boundary, and the head movement (or significant lighting change) signal.

The gathered information can be used to learn visual cues and to initialize essential parameters of the facial action recognition and feature tracking algorithms – for example, the skin-color model used in the CAMSHIFT tracker, or correlation tracker templates. Blinks can also be used to recalibrate the system during its execution.
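As an illustration of this initialization step, the sketch below derives two such parameters from the localized features: a skin-color histogram from the face region and correlation templates around the eyes. The inputs, ROI geometry, and template size are assumptions for the sketch, not values from the paper.

```python
import cv2

def learn_cues(frame, face_box, eye_points, half=12):
    """Derive tracker parameters from the automatically localized features:
    a hue histogram for a CAMSHIFT-style tracker and eye templates for a
    correlation tracker. Sizes and ranges are illustrative assumptions."""
    x, y, w, h = face_box
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    skin_hist = cv2.calcHist([hsv], [0], None, [16], [0, 180])
    cv2.normalize(skin_hist, skin_hist, 0, 255, cv2.NORM_MINMAX)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    templates = [gray[ey - half:ey + half, ex - half:ex + half]
                 for ex, ey in eye_points]
    return skin_hist, templates

def relocate(gray, template):
    """Correlation tracking step: find the best template match."""
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    return top_left, score
```

A drop in the match score, or the next detected blink, would be the natural trigger for the recalibration mentioned above.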
4. Preliminary experimental results
The proposed algorithm has been modeled and tested on the MATLAB/Simulink platform running on a Windows-based PC. Because MATLAB is not suitable for performing tests in real time, part of the algorithm has also been integrated with a C++ framework using Simulink Coder, which offers code generation capabilities. This enables additional evaluation of algorithm performance with user interaction.
To our knowledge, most blink detection algorithms [6, 10, 13, 16] are evaluated only on image sequences from one camera. Our main requirement was that an algorithm designed for system calibration should work well under various conditions, without the need to adjust its parameters. Therefore, the test video sequences have been taken with different cameras, in various lighting conditions and with varying background complexity.
The signals obtained from the algorithm (blink events, eye position, direction of head movement, etc.) are suitable for controlling applications. For example, blinks can emulate mouse clicks, and head movement can be used to control the cursor.
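As a hypothetical illustration of such a mapping (the paper does not name an emulation library), using the cross-platform pynput package:

```python
from pynput.mouse import Button, Controller

mouse = Controller()
GAIN = 8  # cursor pixels per unit of head movement (assumed scaling)

def on_signals(blink, head_dx, head_dy):
    """Map the algorithm's outputs onto pointer actions: a blink emulates
    a left click, head movement drives the cursor."""
    if blink:
        mouse.click(Button.left, 1)
    mouse.move(GAIN * head_dx, GAIN * head_dy)
```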
5. Conclusion

Preliminary experimental results show that the proposed algorithm can be used for automatic adaptation and calibration of an HCI system. Adaptation of the machine to the human, rather than vice versa, is an essential shift toward a human-centered interaction architecture, a trend observed in the human-computer interaction domain [12]. Although the results are promising, there are several issues that should be addressed in future work. More tests and evaluation in various scenarios (especially use by disabled people) can help improve algorithm efficiency.

References
1. Augustyniak P., Mikrut Z.: Complete scanpaths analysis toolbox. EMBS'06, 2006, 5137–5140.
2. Betke M., Gips J., Fleming P.: The camera mouse: Visual tracking of body features to provide computer access for people with severe disabilities. IEEE Trans. on Neural Systems and Rehabilitation Eng., Mar 2002, 10(1):1–10.
3. Bobick A.F., Davis J.W.: Action Recognition Using Temporal Templates, 1997.
4. Bradski G.R.: Computer Vision Face Tracking For Use in a Perceptual User Interface. "Intel Technology Journal", (Q2), 1998.
5. Broniec A.: Motion and motion intention representation in EEG signal and possibilities of application in man-machine interface. Master's thesis, AGH University of Science and Technology, Krakow, 2007.
6. Chau M., Betke M.: Real time eye tracking and blink detection with USB cameras. Technical report, Boston University Computer Science, Apr 2005.
7. Evans D.G., Drew R., Blenkhorn P.: Controlling mouse pointer position using an infrared head-operated joystick. IEEE Trans., Dec 2000, 8(1):107–117.
8. Hsu R-L., Abdel-Mottaleb M., Jain A.K.: Face detection in color images. "IEEE Trans. Pattern Anal. Mach. Intell.", 2002, 24(5):696–706.
9. Gong S., McKenna S.J., Psarrou A.: Dynamic Vision: From Images to Face Recognition. Imperial College Press, London, UK, 2000.
10. Lalonde M., Byrns D., Gagnon L., Teasdale N., Laurendeau D.: Real-time eye blink detection with GPU-based SIFT tracking. CRV, 2007, 481–487.
11. Lindeberg T.: Scale-space theory: A basic tool for analysing structures at different scales. "Journal of Applied Statistics", 1994, 21(2):224–270.
12. Lisetti Ch.L., Schiano D.J.: Automatic facial expression interpretation: Where human-computer interaction, artificial intelligence and cognitive science intersect. "Pragmatics and Cognition" (Special Issue), 2000, 8(1):185–235.
13. Lombardi J., Betke M.: A self-initializing eyebrow tracker for binary switch emulation, 2002.
14. Ober J., Hajda J., Loska J.: Applications of eye movement measuring system OBER2 to medicine and technology. SPIE, Apr 1997, 3061:327–336.
15. Przybylo J.: Eye and nostril localization for automatic calibration of facial action recognition system. Computer Recognition Systems 3, Springer-Verlag, May 2009, vol. 57/2009, 53–61.
16. Zhiwei Zhu: Real-time eye detection and tracking under various light conditions. ACM Press, 2002, 139–144.
Automatyczna adaptacja interfejsu człowiek-komputer

Streszczenie: Human-computer interaction (HCI) is a new field of science aimed at developing natural and intuitive ways of using computers and devices. Among the many solutions in use are systems that track facial features and recognize facial expressions using image processing and analysis algorithms. Facial expression recognition is a difficult task due to the many sources of variation in facial appearance. Very often this results in the need for inconvenient manual calibration of the system. Therefore, in the author's opinion, the key issue is to construct the algorithms so that the system adapts automatically to the human, and not vice versa.

Słowa kluczowe: human-computer interaction; image processing and analysis; adaptation and calibration
Jaromir Przybyło, PhD

Jaromir Przybyło was born in 1975 in Krakow, Poland. He graduated with outstanding proficiency, receiving the MSc Eng. degree in 2000 from the Faculty of Electrical Engineering, Automatics, Computer Science and Electronics, AGH-UST. He received his PhD degree (with honours) in 2008 in Computer Science: Biocybernetics and Biomedical Engineering. Since 1999 he has worked at the AGH-UST Institute of Automatics, as a research and teaching assistant and, since 2009, as an assistant professor.

e-mail: [email protected]