A group of researchers from the University of Granada (UGR) has developed Inmamusys, a software program that can create music in response to emotions that arise in the listener. By using artificial intelligence (AI) techniques, the program enables original, copyright-free and emotion-inspiring music to be played continuously.
UGR researchers Miguel Delgado, Waldo Fajardo and Miguel Molina decided to design a software program that would enable a person who knew nothing about composition to create music. The system they devised, using AI, is called Inmamusys, an acronym for Intelligent Multiagent Music System, and is able to compose and play music in real time.
This prototype, described recently in the journal Expert Systems with Applications, could, if successful, bring about great changes to the intrusive and repetitive canned music played in public places.
Miguel Molina, lead author of the study, says that while the repertoire of such canned music is very limited, the new invention can be used to create a pleasant, non-repetitive musical environment for anyone who has to be within earshot throughout the day.
Everyone's ears have suffered the effects of repetitively played canned music, be it in workplaces, hospital environments or during phone calls made to directory inquiries numbers. On this basis, the research team decided that it would be "very interesting to design and build an intelligent system able to generate music automatically, ensuring the correct degree of emotiveness (in order to manage the environment created) and originality (guaranteeing that the tunes composed are not repeated, and are original and endless)."
Inmamusys has the necessary knowledge to compose emotive music through the use of AI techniques. In designing and developing the system, the researchers worked on the abstract representation of the concepts necessary to deal with emotions and feelings. To achieve this, Molina says, "we designed a modular system that includes, among other things, a two-level multiagent architecture."
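The paper's press summary does not spell out the internals of these agents, but a two-level multiagent design of this kind can be sketched roughly as follows: a higher-level agent holds the emotional goal of the piece, while lower-level agents turn that goal into concrete melodic and rhythmic material. The class names, emotion labels and pitch rules below are illustrative assumptions only, not the actual Inmamusys implementation.

```python
# Minimal sketch of a two-level multiagent composer.
# All names, emotion categories and musical rules here are assumptions
# for illustration; they are not taken from the Inmamusys paper.
import random


class SurfaceAgent:
    """Lower-level agent: turns an abstract emotional target into concrete material."""

    # Hypothetical mapping from an emotion label to musical parameters.
    PARAMS = {
        "calm":      {"scale": [60, 62, 64, 65, 67, 69, 71], "leap": 2},
        "energetic": {"scale": [60, 62, 63, 65, 67, 68, 70], "leap": 5},
    }

    def __init__(self, role):
        self.role = role  # e.g. "melody" or "rhythm"

    def generate(self, emotion, length=8):
        p = self.PARAMS[emotion]
        if self.role == "melody":
            # Random walk over the chosen scale; the leap size shapes the contour.
            idx, notes = 0, []
            for _ in range(length):
                idx = max(0, min(len(p["scale"]) - 1,
                                 idx + random.randint(-p["leap"], p["leap"])))
                notes.append(p["scale"][idx])
            return notes
        # Rhythm role: note durations in beats.
        return [random.choice([0.5, 1.0]) for _ in range(length)]


class CompositionAgent:
    """Higher-level agent: holds the emotional goal and coordinates surface agents."""

    def __init__(self, emotion):
        self.emotion = emotion
        self.agents = {role: SurfaceAgent(role) for role in ("melody", "rhythm")}

    def compose_bar(self):
        melody = self.agents["melody"].generate(self.emotion)
        rhythm = self.agents["rhythm"].generate(self.emotion)
        return list(zip(melody, rhythm))  # (MIDI pitch, duration) pairs


if __name__ == "__main__":
    composer = CompositionAgent("calm")
    # Because each bar is freshly generated, the stream never literally repeats.
    for _ in range(2):
        print(composer.compose_bar())
```

In this toy version the "emotiveness" the researchers describe is reduced to a parameter table, and "endlessness" to the fact that every bar is generated on demand rather than drawn from a fixed repertoire.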
A survey was used to evaluate the system, with the results showing that users are able to identify the type of music composed by the computer. A person with no musical knowledge whatsoever can use this artificial composer, because the user need do nothing more than decide on the type of music.
Beneath the system's ease of use, Miguel Molina reveals, lies a complex framework that allows the computer to imitate a feature as human as creativity. Aside from creativity, composing music also requires specific knowledge.
According to Molina, this "is usually something done by human beings, although they do not understand how they do it. In reality, there are numerous processes involved in the creation of music and, unfortunately, we still do not understand many of them. Others are so complex that we cannot analyse them, despite the enormous power of current computing tools. Nowadays, thanks to the advances made in computer sciences, there are areas of research -- such as artificial intelligence -- that seek to reproduce human behaviour. One of the most difficult facets of all to reproduce is creativity."
Farewell to copyright payments
Commercial development of this prototype will not only change how research is carried out into the relationship between computers and emotions, how people interact with music, and the structures by which music is composed in the future; it will also, say the study's authors, serve to reduce costs.
According to the researchers, "music is highly present in our leisure and working environments, and a large number of the places we visit have canned music systems. Playing these pieces of music involves copyright payments. Our system will make these music copyright payments a thing of the past."