If that sentence only raises questions in your mind, you can be forgiven!
NFT stands for non-fungible token, a term used for digital assets like videos and images whose ownership has been recorded on the blockchain. It has the potential to revolutionise the art world and allow artists to be properly credited for their work. The assets in this exhibition belong to a single collector, 33NFT, who owns over 1000 NFTs.
Here is an impression of what you can see at the exhibition:
Why go to the exhibition?
The art on display is beautiful and futuristic
What you’re looking at is stunning digital art on a large display screen
The topics of the art are thought-provoking
Witness the dawn of a new age in artists’ royalties and rights
Learn something about NFTs and blockchain technology
The complementary exhibitions of immersive digital and traditional art in the rest of the museum are breathtaking
Please share this blog post and subscribe so that I can continue to create approachable tech content!
Short and sweet, the low-down on the Metaverse without the hype.
What is the metaverse?
It is an immersive and engaging digital experience.
The metaverse adds a 3-dimensional component to standard web experiences, so you can navigate in 3 dimensions, thereby making it a more natural experience.
You access the metaverse over the internet, using a Virtual Reality headset, Augmented Reality on a phone or tablet, or sometimes simply a browser or an app.
Spaces in the metaverse are created by companies, like digital twins of their office buildings or of museums and towns.
Regular people can also create their own spaces on websites like Spatial.IO or Decentraland.
You are generally represented by a 3D avatar. Using ReadyPlayerMe, you can create one avatar and reuse it across multiple metaverse applications. Your avatar can perform whatever actions are supported in the space you are in.
What is the potential of the metaverse?
Companies can host online meetings in a more immersive style using 3D virtual reality technologies
Artists can engage with people online by hosting events for more immersive online viewing.
Digital assets like NFTs can be added to enhance and enrich the experience, increasing the e-commerce opportunities of the metaverse.
Online gaming can take place in shared 3D worlds, like Roblox, where users generate their own content and participate in a sophisticated economy
Problems with the Metaverse
You need a device like a headset, a smartphone or a PC to participate.
The metaverse is not one cohesive universe, one set of linked digital experiences, but a concept of spatial experiences creating a new kind of digital reality for participants.
The metaverse depends on attractive, “sticky” content created by companies and individual contributors to draw people in
Interaction is still fairly clunky and doesn’t yet feel natural
There is no shared reality, practically speaking, except virtual shared experiences via a device
Web 3.0 is the semantic web: content enriched with metadata so that it is machine-interpretable. It might be used for the metaverse in the future.
Blockchain and the metaverse
Is the blockchain needed for the metaverse? Many definitions of the metaverse reference blockchain technology as a basis for decentralisation and commerce. I would say it depends on your definition: from my perspective, blockchain does not seem to be strictly needed.
NFT stands for non-fungible token: a fancy way of saying a token on the blockchain that is unique and cannot be exchanged one-for-one like money. NFT has come to be synonymous with digital assets like images, digital artworks and digital branded accessories for avatars.
The Roblox phenomenon
Roblox is a massive online gaming platform where users create experiences for each other, play each other’s games and participate in a shared virtual economy. It’s an example of a metaverse application: a shared virtual universe.
Roblox is used predominantly by people under the age of 16.
How to get started in the Metaverse?
Go to one of the sites listed and create an avatar:
You will go through a tutorial on how to navigate in that space using keys like WASD and the mouse to change the camera angle. Try to get accustomed to the interaction experience and navigate to different locations.
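Conceptually, that keyboard navigation boils down to mapping keys to movement directions on the ground plane. The sketch below is my own toy illustration; the key map and `move` function are assumptions for demonstration, not code from any metaverse platform:

```python
# Toy sketch of WASD navigation: each key maps to a movement
# direction on a flat ground plane (x = strafe, z = forward/back).
# Illustrative only - not code from Spatial.IO, Decentraland, etc.

KEY_DIRECTIONS = {
    "w": (0, 1),   # forward
    "s": (0, -1),  # backward
    "a": (-1, 0),  # strafe left
    "d": (1, 0),   # strafe right
}

def move(position, key, speed=1.0):
    """Return the new (x, z) position after a key press."""
    dx, dz = KEY_DIRECTIONS.get(key, (0, 0))  # unknown keys do nothing
    x, z = position
    return (x + dx * speed, z + dz * speed)

pos = (0.0, 0.0)
for key in "wwd":  # two steps forward, one step right
    pos = move(pos, key)
print(pos)  # prints (1.0, 2.0)
```

In a real engine the mouse would additionally rotate the camera, so “forward” follows where you are looking; this sketch keeps the directions fixed for simplicity.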
This summer vacation, while visiting my sister in the UK and having a lovely visit with her neighbour Janet, I got the opportunity to experience a Joy For All robotic kitty firsthand.
This robotic pet was created to help people with dementia and Alzheimer’s enjoy the soothing experience of having a pet without the responsibility that comes with it.
Janet bought her cat simply to have something cuddly and lifelike around without having to take care of a pet – something much more engaging and lovable than a stuffed toy (no offence to stuffed toys!).
The cat was quite enjoyable to interact with: it purrs, meows and is very soft, with flexible limbs and movable ears – it even has its own brush.
It does a few tricks like rolling over and raising its paws! Overall it’s a fun and interesting experience and it reminded me that human beings need companionship and comfort and sometimes we can get a little help from technology.
Janet also happened to have a custom-designed leopard painted on her shed, complete with movable tail. The design of the mechanism was simple and effective, with a coat hanger and a rotating wooden disk. The eyes are LEDs, and the whole thing can be turned on and off using a Philips Hue plug. It made me wonder: why do I consider this a robot? Although it’s not capable of changing its behaviour based on input, I’d categorise it somewhere along the lines of animatronics.
Speaking of animatronics, here is a video of animatronic dinosaurs from our recent visit to the Leiden Naturalis museum and some dinosaur fossils!
Well, that’s all for now. I wish for you to also spot many robots in the wild – happy hunting!
Our Nao recently stopped working due to a completely flat battery.
My boyfriend Renze and I have had it since 2016, and it is no longer serviced by the manufacturer now that the four-year warranty is up.
Investing in social robots has proven to be a bit tricky. Companies go bankrupt or are acquired (e.g. our Anki Vector robot), and then strategies change, leaving you with expensive hardware at worst gathering dust on a shelf and at best needing additional investment to keep working. You might own the hardware, but an important part of the package is often cloud-based software or development APIs that won’t be maintained and updated indefinitely.
We hadn’t used our Nao for quite a while due to other projects, but since I recently started a robotics course that uses the Nao, I thought it would be good to dust it off again – only to find the battery completely flat.
We’ve sent our Nao to Softbank in France to be serviced once before, due to its somewhat fragile fingers breaking.
This time we’ve been lucky: only the battery is flat, and nothing more serious is wrong with it.
We found a blog post detailing how to replace the dead cells in Nao’s battery pack:
We first removed the battery pack by unscrewing the hatch, then we tested the cells and found them depleted.
We opened the battery pack and it seemed possible to swap the cells and reconnect the circuit board to them.
Next we had to order new cells online (these are standard 18650 Lithium-ion cells) and buy a mini spot welder to weld the zinc connectors to the cells.
We soldered the connectors onto the circuit board and taped the battery pack together with heat resistant tape.
We did some careful testing by charging the battery with the official charger and then installing it back into Nao. Finally our Nao is back in working order! And we’ve gained some confidence and added some new technical skills (and equipment) to our toolkit.
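As a rough illustration of the cell check we did by hand with a multimeter, the snippet below classifies 18650 Li-ion cells by resting voltage. The thresholds and the `classify_cell` helper are my own illustrative assumptions, not official values for Nao’s battery pack:

```python
# Toy sketch of checking 18650 Li-ion cells by resting voltage.
# The thresholds below are illustrative assumptions only, not
# manufacturer specifications for Nao's battery pack.

def classify_cell(voltage):
    """Roughly classify an 18650 Li-ion cell by its resting voltage."""
    if voltage < 2.5:
        return "over-discharged: replace"
    if voltage < 3.0:
        return "very low: recover with care"
    return "within normal range"

# Our cells measured essentially flat, so every one needed replacing.
pack = [0.8, 0.9, 1.1]
for i, v in enumerate(pack):
    print(f"cell {i}: {v:.1f} V -> {classify_cell(v)}")
```

In practice we simply measured each cell and replaced the whole set, since deeply over-discharged Li-ion cells are generally not safe to recharge.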
Challenges and opportunities that human perception provides to robotic design
This is an essay that I wrote for a course in 2015.
Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley
In this essay I will describe some aspects of human perception and cognition which may be considered to have an impact on human-robot interaction. I would like to discuss the findings on the Uncanny Valley as described in the provided article, Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015) and the parallels we might draw to other aspects of human-robot interaction. I will consider the application of theory-of-mind to the uncanny valley concept. Furthermore, I would like to discuss the possibility of robots making use of human cognitive and perceptive limitations and biases to pursue their goals more effectively.
Perception is the process that makes sense of the information we receive from our senses (Gilhooly et al., 2014, p. 30). Illusions occur when we misinterpret the signals from our senses by applying heuristics and rules of interpretation that do not always hold true. A classic example is the Müller-Lyer illusion: a line with outward-pointing arrowheads at each end looks shorter than a line of the same length with inward-pointing arrowheads.
Social perception describes the way that human biology and physiology constrain and influence our social interaction (Gilhooly et al., 2014, p. 64). The physical limitations of the aspects of the human body that produce communicative signals – our facial structures and expressive capabilities, vocal structures and gestural capabilities – result in a limited range of communicative input that we can perceive and process. Faces provide valuable information on emotion, age, gender, attractiveness and identity. We are even capable of identifying faces under challenging conditions, including low light and long range. Very specific areas and networks in the brain are engaged in facial recognition – thus we are highly and specifically adapted for it. The composition of features on the face is as important for recognition as the appearance of the features themselves. Social perception leads to social and emotional responses, which inform us about the internal states, motivations and emotions of others.
Biases in human decision making and reasoning
Decision making is the process of choosing between options (Gilhooly et al., 2014, p. 325). There are several theories which model biases that human beings show in our basic decision-making processes, including:
Prospect theory, in which we weigh potential losses more heavily than equivalent gains.
Multi-attribute decision making, which simplifies complex factors in the decision using elimination-by-aspects and satisficing.
Probabilities based decision making, which uses heuristics like availability, representativeness and ignoring of base rates.
Emotions-based decision making, using fast System 1 thinking instead of reasoning and logic.
Reasoning is the process of deriving new information from what we already know about a topic. The text covers two types of reasoning: deductive reasoning, which involves drawing logical conclusions from what we already know and applying these rules to specific information to obtain new information, and inductive reasoning, which involves inferring generalisations from specific observations.
Here are some biases we have in our reasoning faculties:
Illusory inferences, in which we make the error of assuming that something that is only likely to be true is definitely true
Belief bias, in which what we believe to be true overcomes our logic and reasoning about a subject
Matching bias, in which to simplify, we make selections based on matching features we have heard in the problem with those which appear in the possible solutions, instead of using logic to solve the problem
Confirmation bias, in which we tend to look for evidence that confirms our hypotheses, instead of evidence that would disconfirm them.
The Uncanny Valley
The essence of the Uncanny Valley theory, first described by Masahiro Mori in his 1970 paper on the subject, is that human beings dislike imperfection in human-likeness. Mori’s paper describes the effects of robot appearance and robot movement on our affinity for the robot (Mori, 1970). In this course we were provided an article, ‘Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley’ (Mathur and Reichling, 2015), which deals with the existence of an Uncanny Valley effect in the relationship between robot likeability and trustworthiness on the one hand and how mechanical or human the robot’s appearance is on the other. The study used several pictures of robots, obtained on the internet, with varying levels of mechano-humanness. The images were rated by study participants for different characteristics, and it was found that mechanical and human appearance could be treated as one continuous variable.
Perceived emotion did not influence the relationship between likeability and mechano-humanness; however, faces showing more positive emotion appeared more likeable.
Very mechanical looking robots were seen to be unlikeable, and likeability increased as the robot faces became less mechanical in appearance. Likeability declined closer to the confusing boundary between mechanical and human-looking robots. Likeability increased dramatically as the robots began to look much more human.
Another experiment examined trust, and a similar pattern was evident for trust versus mechano-humanness. A very mechanical robot was seen as untrustworthy; trust increased as the robot’s appearance became less mechanical, decreased as it approached the boundary between mechanical- and human-looking, and rose again for very human-looking robots. Robots which showed more positive emotion were seen as more trustworthy.
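To picture the shape of these results, here is a toy sketch of an uncanny-valley-like likeability curve. The function is entirely my own illustration, not the model fitted by Mathur and Reichling; it only reproduces the qualitative shape described above: a gradual rise, a dip near the mechanical/human boundary, and a recovery at the human end.

```python
# Toy sketch of an uncanny-valley-shaped likeability curve.
# My own illustrative function, NOT the curve fitted in the article:
# likeability rises with humanness, dips sharply near the
# mechanical/human boundary, and recovers at the very human end.
import math

def likeability(h):
    """Toy likeability score for mechano-humanness h in [0, 1]."""
    rise = h  # baseline: more human-looking is more likeable
    valley = 0.9 * math.exp(-((h - 0.75) ** 2) / 0.005)  # dip at boundary
    return rise - valley

for h in [0.0, 0.4, 0.75, 1.0]:
    print(f"h={h:.2f} likeability={likeability(h):+.2f}")
```

The boundary location (0.75) and dip depth (0.9) are arbitrary choices made purely so the curve dips below zero near the category boundary, mirroring the pattern the study reports.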
Theory of mind
The article describes that we trust others whom we believe to have interests that encapsulate our own. Subjects in the study might have tried to assess the robot’s interests, possibly applying theory of mind to the robot and attributing to it a stance implying thoughts, beliefs and desires. Alternatively, the robots may have been treated as inanimate agents of the robot maker. The article continues as follows about the nature of human-robot interaction:
“humans appear to infer trustworthiness from affective cues (subtle facial expressions) known to govern human–human social judgments. These observations help locate the study of human-android robot interaction squarely in the sphere of human social psychology rather than solely in the traditional disciplines of human factors or human–machine interaction”.
Katsyri et al. (2015) suggest that the Uncanny Valley effect arises at the boundary between perceptual categories – between the non-human and human categories. This confusion is measured experimentally as the additional time it takes to categorise a stimulus. The experimental results in the study confirmed the theory: very human- or very mechanical-looking faces were classified more quickly than more ambiguous ones, and the maximum rating time occurred at the point of minimum likeability. Results also indicated that very human robots were more likeable, but suggest that small faults in their humanness would push them back down the likeability scale into the Uncanny Valley. The article speculates that Uncanny Valley effects on trust may be influenced by robots’ individual characteristics like sex, facial form and perceived emotion.
Application of Uncanny Valley to Theory of Mind
The Uncanny Valley theory in the article selected is applied to the relationship between robot appearance and likeability or trust. It could also be valid to apply the Uncanny Valley model to the theory of mind that human social partners would develop when interacting with robots.
Lack of human reasoning and decision-making biases
Examples of robot-specific cognition characteristics include the simple absence of the human reasoning and decision-making biases we have discussed in the text. For instance, a robot with enough resources, processing capacity and smart programming might, instead of exhibiting human biases such as prospect-theory loss aversion or emotions-based System 1 decision making, apply logic and clear reasoning when making decisions.
Additional robotic cognitive biases and limitations
Specific cognitive biases could occur due to a robot’s hardware and software limitations. Examples of limitations in cognition and perception for robots could include not being able to listen and speak at the same time, slow processing times causing a delay in responses, answering exactly the same answer to the same question every time without originality or creativity, or not understanding humour or metaphors. Such limitations can create a disconnect and serve to remind the human social partners that they are talking to a machine.
Preference for different decision making methods
If a robot does not use a typically human decision-making method, it could create a disconnect for the observing human being. For example, the textbook discusses multi-attribute utility theory as a way of comparing complex decisions, which might be considered a typically human response to the task of choosing where to buy a house. A less typically human approach might be to choose based on a statistical analysis, e.g. “similar robots live in houses like this, therefore I should choose this house”.
Navigating up and down the walls of the Uncanny Valley
In the article we see that very human looking robots were more likeable but that small defects in their humanness could push them down the valley into unlikeability. Robots that were closer to the boundary between human and mechanical looking could become less likeable when their appearance became uncharacteristically human.
This concept can also be related to the uncanny valley of theory of mind. Imagine a robot with quite mechanical thought patterns. This robot could become very unlikeable by unexpectedly exhibiting thought patterns which are too human. This would be an indirect way of applying the idea of the Uncanny Valley. Instead of a direct effect of visually observing and feeling, in this case we would be considering the accumulated effect of small pieces of information and patterns used to classify the robot over a presumably longer period of time.
Use of cognitive limitations when robots interact with humans
How should robots make use of this knowledge we have about human cognition? Let us imagine that robots have an explicit model of human thinking and are able to predict the kinds of biases we would most probably exhibit in decision making. A robot could use this information to be more successful in serving its goal. For instance, if a robot is a salesperson, it could exploit several of the decision-making biases listed above to sell more effectively. Using the matching bias, for example, the robot could present a story about a problem a human might have and embed its desired solution in the story. When the time comes to select a solution, matching bias can cause the person to select the robot’s offered solution or product instead of using reasoning and logic.
Human perceptive limitations
Robots could also make use of human perceptive limitations, for instance knowledge of the human visual system could allow a robot to perform an action faster than a human can perceive it. Micro-expressions could be used in this way to create a better (or worse) rapport with human beings. Visual or auditory illusions could also be manufactured with this knowledge.
In this essay, I was provoked by the concept of applying our cognitive processes and limitations to human-robot interaction. I posed the question of how robots can use knowledge of human cognition, including its limitations, to better reach their goals. I also reason that the uncanny valley effect could apply to more than the appearance of robots: it could apply to the mental model of robot psychology that human social partners might build. In that case there may be jarring effects, pushing the robot further into the uncanny valley, when its thinking patterns suddenly seem incongruous – either too mechanical or too human. In these ways our understanding of human perception and decision making can be applied to human-robot interaction. Furthermore, I postulate that, just as human beings have cognitive peculiarities due to evolutionary, physiological and psychological reasons, so too do robots have cognitive peculiarities based on their hardware and software. I have tried to show my understanding of the text by abstracting the essence of the course material and reflecting it onto the mechanical mind. I also reflected on the cognitive processes of human beings from the point of view of robots, and on robot cognitive processes from the point of view of humans. Thus the theories of human cognition can be further applied to draw parallels with robot cognition.
Mathur, M. B. and Reichling, D. B. (2015). “Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley.” Cognition, 146.
Mori, M. (1970). “The Uncanny Valley.” Energy, 7(4).
Gilhooly, K., Lyddy, F. and Pollick, F. (2014). Cognitive Psychology. London: McGraw Hill Publishing.