What are the digital competencies needed in the future? Our head of department has challenged me to talk about this topic at an internal seminar today. Here is a summary of what I said.
Competencies vs skills
First, I think it is crucial to separate competencies from skills. The latter relates to how you do something. There has been much focus on teaching skills, mainly teaching people how to use various software or hardware. This is not necessarily bad, but it is not the most productive thing in higher education, in my opinion. Developing competency goes beyond learning new skills.
Some argue that skill is only one of three parts of competency, with knowledge and abilities being the others:
Skills + Knowledge + Abilities = Competencies
So a skill can be seen as part of competency, but it is not the same. This is particularly important in higher education, where the aim is to train students for life-long careers. As university teachers, we need to develop our students’ competencies, not only their skills.
Digital vs technological competency
Another misunderstanding is that “digital” and “technology” are synonyms; they are not. Technologies can be either digital or analogue (or a combination). Think of “computers”. The word originally referred to humans (often women) who carried out advanced calculations by hand. Human computers were eventually replaced by mechanical computing machines, while today we mainly find digital computers. Interestingly, there is again a growing amount of research on analogue computers.
I often argue that traditional music notation is a digital representation. Notes such as “C”, “D”, and “E” are symbolic representations of a discrete nature, and these digital notes may be transformed into analogue tones once performed.
One often talks about the differences between acoustic and digital instruments. This is a division I criticise in my upcoming book, but I will leave that argument aside for now. Independent of the sound production, I have over the years grown increasingly fond of Tellef Kvifte’s approach of distinguishing between analogue and digital control mechanisms of musical instruments. From that perspective, one could argue that an acoustic piano is a digital instrument because it is based on discrete control (with separate keys for “C”, “D”, “E”, and so on).
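To make the point concrete, here is a small Python sketch that maps discrete note symbols to continuous frequency values (assuming 12-tone equal temperament with A4 = 440 Hz). The note names are the digital part; the frequencies stand in for the analogue tones of a performance.

```python
# A minimal sketch: mapping discrete note symbols to continuous frequencies.
# The digital representation ("C4", "D4", ...) only becomes an analogue tone
# when it is realised as a continuously varying signal.

NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_frequency(name: str, octave: int, a4: float = 440.0) -> float:
    """Convert a note symbol (e.g. 'C', 4) to a frequency in Hz,
    assuming 12-tone equal temperament with A4 = 440 Hz."""
    semitones_from_a4 = NOTE_OFFSETS[name] - 9 + 12 * (octave - 4)
    return a4 * 2 ** (semitones_from_a4 / 12)

# The score is digital: a finite set of discrete symbols ...
melody = [("C", 4), ("D", 4), ("E", 4)]

# ... while the performed tone is analogue: a continuous frequency value.
for name, octave in melody:
    print(f"{name}{octave}: {note_to_frequency(name, octave):.2f} Hz")
```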
Four levels of technology research and usage
When it comes to music technologies, I often like to think of four different layers: basic research, applied research and development, usage, and various types of meta-perspectives. I have given some examples of what these may entail in the table below.
| Basic research | Applied R&D | Usage | Meta-perspectives |
| --- | --- | --- | --- |
| Music theory | Hardware | Instrument making | Pedagogy |
| Music cognition | Software | Composing | Psychology |
| Musical interaction | Algorithms | Producing | Sociology |
| … | Databases | Performing | History |
| | Network | Analysing | Aesthetics |
| | Interaction design | … | … |
| | … | | |
Most of our research activities can be categorised as being on the basic research side (plus various types of applied R&D, although mainly at a prototyping stage) or on the meta-perspectives side. To generalise, one could say that the former is more “technology-oriented” while the latter is more “humanities-oriented.” That is a simplification of a complex reality, but it may suffice for now.
The problem is that many educational activities (ours and others’) focus on the use of technologies. However, today’s kids don’t need to learn how to use technologies; most agree that they are eager technology users from the start. It is much more important that they learn about the fundamental issues related to digitalisation and why technologies work the way they do.
Digital representation
Given the level of digitisation that has happened around us over the last decades, I am often struck by the lack of understanding of digital representation. By that, I mean a fundamental understanding of what a digital file contains and how its content ended up in a digital form. This also influences what can be done to the content. Two general examples:
- Text: even though the content may look more or less identical to someone comparing a .TXT file with a .DOCX/ODT file, these are two completely different ways of representing textual information.
- Numbers: storing numbers in a .DOCX/ODT table is completely different from storing the same numbers in a .XLSX/ODS file (or a .CSV file for that matter).
One can think of these as different file formats that one can convert between. But the underlying question is what type of digital representation one wants to capture and preserve, which in turn determines what can be done with the content.
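A small Python sketch can make the difference tangible (the file names here are only placeholders): a .TXT file is just a sequence of encoded characters, a .DOCX file is really a ZIP archive full of XML parts, and numbers in a .CSV file are stored as text that must be parsed back into numbers.

```python
# A small sketch of how differently "the same" content can be represented
# digitally. The file names are only placeholders.
import csv
import zipfile

# A .TXT file is simply a sequence of encoded characters ...
with open("notes.txt", "rb") as f:
    print(f.read(40))             # the raw bytes of the text itself

# ... while a .DOCX file is really a ZIP archive full of XML parts.
with zipfile.ZipFile("notes.docx") as z:
    print(z.namelist()[:5])       # e.g. '[Content_Types].xml', 'word/document.xml'

# Numbers in a .CSV file are stored as text and must be parsed back into numbers.
with open("numbers.csv", newline="") as f:
    rows = [[float(value) for value in row] for row in csv.reader(f)]
print(rows)
```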
From a musical perspective, there are many types of digital representations:
- Scores: MIDI, notation formats, MusicXML
- Audio: uncompressed vs. compressed formats, audio descriptor formats
- Video: uncompressed vs. compressed formats, video descriptor formats
- Sensor data: motion capture, physiological sensors, brain imagery
Students (and everyone else) need to understand what such digital representations mean and what they can be used for.
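As one example, the sketch below shows what an uncompressed audio representation boils down to: a sampling rate, a bit depth, and a long list of numbers (the output file name is just a placeholder). Every representation in the list above involves similar choices about what to capture and at what resolution.

```python
# A minimal sketch of what an uncompressed audio file actually contains:
# a sampling rate, a bit depth, and a long sequence of sample values.
import math
import struct
import wave

SAMPLE_RATE = 44100      # samples per second
DURATION = 1.0           # seconds
FREQUENCY = 440.0        # Hz (the tone A4)

# The "analogue" tone becomes a sequence of discrete 16-bit samples.
samples = [
    int(32767 * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION))
]

# Writing them to a WAV file makes the representation explicit.
with wave.open("a4.wav", "wb") as w:
    w.setnchannels(1)                      # mono
    w.setsampwidth(2)                      # 16 bits = 2 bytes per sample
    w.setframerate(SAMPLE_RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```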
Algorithmic thinking
Computers are based on algorithms: well-defined sets of instructions for doing something. Algorithms can be written in computer code, but they can also be written with a pen on paper or drawn as a flow diagram. The main point is that algorithmic thinking is a particular type of reasoning that people need to learn. It is essential to understand that any complex problem can be broken down into smaller pieces that can be solved independently.
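As a tiny illustration (with made-up sample values), normalising a recording can be broken into three smaller pieces: find the peak, compute a gain, apply the gain. Each piece can be solved and tested on its own.

```python
# A small illustration of algorithmic thinking: a problem (normalising a
# recording) broken into smaller pieces that can be solved independently.

def find_peak(samples):
    """Step 1: find the largest absolute sample value."""
    return max(abs(s) for s in samples)

def compute_gain(peak, target=1.0):
    """Step 2: work out how much to scale the signal."""
    return target / peak if peak else 1.0

def apply_gain(samples, gain):
    """Step 3: apply the scaling to every sample."""
    return [s * gain for s in samples]

def normalise(samples):
    """The full algorithm is just the three steps chained together."""
    return apply_gain(samples, compute_gain(find_peak(samples)))

print(normalise([0.1, -0.4, 0.25]))   # scales the signal so the peak becomes 1.0
```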
Not everyone will become a programmer or software engineer, but there is a growing understanding that everyone should learn basic coding, and algorithmic thinking is at the core of that. At UiO, this has been implemented widely in the Faculty of Mathematics and Natural Sciences through the Computing in Science Education initiative. We don’t have a similar initiative in the Faculty of Humanities, but several departments have increased the number of courses that teach such perspectives.
Artificial Intelligence
There is a lot of buzz around AI, but most people don’t understand what it is all about. As I have written about several times on this blog (here and here), this makes people either overly enthusiastic or overly sceptical about the possibilities of AI. Not everyone can become an AI expert, but more people need to understand AI’s possibilities and limitations. We tried to explain this in the “AI vs Ary” project, as documented in a short documentary (in Norwegian only).
The future is analogue
In all the discussions about digitisation and digital competency, I find it essential to remind people that the future is analogue. Humans are analogue; nature is analogue. We have a growing number of machines based on digital logic, but these machines contain many analogue components (such as the mechanical keys that I am typing this text on). Much of the current development in AI is bio-inspired, and there are even examples of new analogue computers. Understanding the limitations of digital technologies is also a competency that we need to teach our students.
All in all, I am optimistic about the future. There is a much broader understanding of the importance of digital competency these days. Still, we need to explain that this entails much more than learning how to use particular software or hardware devices. It is OK to learn such skills, but it is even more important to develop knowledge about how and why such technologies work in the first place.