<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.2.0">Jekyll</generator><link href="http://annaxambo.me/feed.xml" rel="self" type="application/atom+xml" /><link href="http://annaxambo.me/" rel="alternate" type="text/html" /><updated>2026-01-05T17:50:59+01:00</updated><id>http://annaxambo.me/feed.xml</id><title type="html">Anna Xambó, PhD</title><subtitle>Personal website, blog and portfolio of Anna Xambó.
</subtitle><entry><title type="html">Reflecting on 2025</title><link href="http://annaxambo.me/blog/research/2026/01/04/reflecting-on-2025/" rel="alternate" type="text/html" title="Reflecting on 2025" /><published>2026-01-04T14:00:00+01:00</published><updated>2026-01-04T14:00:00+01:00</updated><id>http://annaxambo.me/blog/research/2026/01/04/reflecting-on-2025</id><content type="html" xml:base="http://annaxambo.me/blog/research/2026/01/04/reflecting-on-2025/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2026-01-04-reflecting-on-2025/2025-05-29-ICLC2025-Anna_Xambo_performance_sala_beckett-photo-by-Miquel-Martinez.jpg&quot; /&gt;
	&lt;figcaption&gt;Anna Xambó performing at the Sala Beckett, ICLC 2025 (May 29, 2025).
Photo by Miquel Martinez.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This year has been my second year at the Centre for Digital Music (C4DM), Queen Mary University of London (QMUL) as a Senior Lecturer in Sound and Music Computing.  It is also the second and last year of the &lt;a href=&quot;https://sensingtheforest.github.io&quot;&gt;AHRC-funded Sensing the Forest project&lt;/a&gt;, which I am the principal investigator. We are now concluding this chapter by releasing code repositories and datasets; analysing the data; and producing the final research outputs. It has been an amazing and intense journey! This is also the year when I started the research lab &lt;a href=&quot;https://compsonicartslab.github.io/&quot;&gt;Computational Sonic Arts Laboratory&lt;/a&gt; at C4DM/QMUL.&lt;/p&gt;

&lt;p&gt;Following my previous year reflections (&lt;a href=&quot;/blog/research/2020/12/29/reflecting-on-2020/&quot;&gt;2020&lt;/a&gt;, &lt;a href=&quot;/blog/research/2021/12/31/reflecting-on-2021/&quot;&gt;2021&lt;/a&gt;, &lt;a href=&quot;/blog/research/2022/12/31/reflecting-on-2022/&quot;&gt;2022&lt;/a&gt;, &lt;a href=&quot;/blog/research/2024/01/20/reflecting-on-2023/&quot;&gt;2023&lt;/a&gt;, and &lt;a href=&quot;/blog/research/2025/02/08/reflecting-on-2024/&quot;&gt;2024&lt;/a&gt;), in this blog post, I reflect on how this year has been in terms of key activities and outputs, as well as discuss my intentions for 2026.&lt;/p&gt;

&lt;h2 id=&quot;research&quot;&gt;Research&lt;/h2&gt;

&lt;p&gt;In 2025, my research has focused on the AHRC Sensing the Forest project and publications related to PhD and master’s students. I have co-authored one journal article, four conference papers, an abstract with proceedings, and a position paper:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Zheng, S. J., &lt;strong&gt;Xambó, A.&lt;/strong&gt;, Bryan-Kinns, N. (2025). &lt;a href=&quot;https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2025.1575202/abstract&quot;&gt;“Exploring gestural affordances in audio latent space navigation”&lt;/a&gt;. Frontiers in Computer Science, Sec. Human-Media Interaction, Volume 7 - 2025, &lt;a href=&quot;https://doi.org/10.3389/fcomp.2025.1575202&quot;&gt;https://doi.org/10.3389/fcomp.2025.1575202&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Zhang, Q., Barthet, M., &lt;strong&gt;Xambó, A.&lt;/strong&gt;. (2025) &lt;a href=&quot;https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/113712/Zhang%20From%20Shape%20to%20Music%202025%20Accepted.pdf&quot;&gt;“From Shape to Music: Contour-Conditioned Symbolic Music Generation”&lt;/a&gt;. &lt;em&gt;Proceedings of the International Conference on Technologies for Music Notation and Representation (TENOR 2025)&lt;/em&gt;. Central Conservatory of Music Beijing, Beijing, China.&lt;/li&gt;
  &lt;li&gt;O’Flaherty, T. F., Elmokadem, M. B., Xu, X., Manjunatha, K. N., Roma, G., Xenakis, G., &lt;strong&gt;Xambó, A.&lt;/strong&gt;, (2025) &lt;a href=&quot;https://zenodo.org/records/17642480&quot;&gt;“Sonification Mappings for Sensing Tree Stress: A DIY Approach”&lt;/a&gt;. &lt;em&gt;Proceedings of the Web Audio Conference 2025 (WAC 2025)&lt;/em&gt;. Ircam/Mozilla, Paris, France.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Xambó, A.&lt;/strong&gt;, Roma, G. (2025) &lt;a href=&quot;https://zenodo.org/records/15527968&quot;&gt;Building a Dataset of Personal Live Coding Style Using MIRLCaProxy - A Journal of Creative Sonic Exploration under Constraints and Biases&lt;/a&gt;. In &lt;em&gt;Proceedings of the International Conference of Live Coding&lt;/em&gt;.&lt;/li&gt;
  &lt;li&gt;O’Flaherty, T. F., Marino, L., Saitis, C., &lt;strong&gt;Xambó, A.&lt;/strong&gt; (2025) &lt;a href=&quot;https://www.nime.org/proc/nime2025_67/&quot;&gt;Sonicolour: Exploring Colour Control of Sound Synthesis with Interactive Machine Learning&lt;/a&gt;. In &lt;em&gt;Proceedings of the New Interfaces for Musical Expression&lt;/em&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Xambó, A.&lt;/strong&gt; (2025) &lt;a href=&quot;https://zenodo.org/records/15283062&quot;&gt;Live Coding a Chorale of Sounds Using MIRLCa: State of Affairs and Implications&lt;/a&gt;. In Proceedings of the &lt;em&gt;SuperCollider Symposium 2025&lt;/em&gt;. Johns Hopkins University. Washington D.C., USA.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Xambó, A.&lt;/strong&gt;, Batchelor, P., Marino, L., Roma, G., Bell, M., Xenakis, G. (2025) &lt;a href=&quot;https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/110219/Xambo%20Soundscape-based%20music%202025%20Accepted.pdf?sequence=2&quot;&gt;“Soundscape-based music and creative AI: Insights and promises”&lt;/a&gt;. &lt;em&gt;UK AI Research Symposium (UKAIRS ’25)&lt;/em&gt;, 8-9 September 2025, Newcastle, UK.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, I have been involved as a guest co-editor of the following special issue, which has included 15 publications on the topic of embodied perspectives on sound and music AI:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Erdem, Ç., &lt;strong&gt;Xambó, A.&lt;/strong&gt;, Serafin, S., Griwodz, C. (2025). &lt;a href=&quot;https://www.frontiersin.org/research-topics/64735/embodied-perspectives-on-sound-and-music-ai&quot;&gt;Special Issue on Embodied Perspectives on Sound and Music AI&lt;/a&gt;. &lt;em&gt;Frontiers in Computer Science&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;teaching&quot;&gt;Teaching&lt;/h2&gt;

&lt;p&gt;This year, I completed the Postgraduate Certificate in Academic Practice (PGCAP) to become, since summer 2025, a Fellow at the Higher Education Academy. Similar to last year, I have contributed to the following two modules as a module organiser and lecturer:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;ECS742P Interactive Digital Multimedia Techniques (Autumn 2025)&lt;/li&gt;
  &lt;li&gt;ECS637U/ECS757P Digital Media and Social Networks (Spring 2025)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As part of the Interactive Digital Multimedia Techniques (IDMT), we organised a concert at the end of the module featuring the instruments built by the students during the module.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2026-01-04-reflecting-on-2025/IDMT-poster-2025.jpg&quot; /&gt;
	&lt;figcaption&gt;Poster for the IDMT 2025 Concert.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id=&quot;music&quot;&gt;Music&lt;/h2&gt;

&lt;p&gt;This year, I was able to materialise the Sensing the Forest performance twice:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Xambó, Anna (November 19, 2025). Sensing the Alice Holt Forest, live performance. Web Audio Conference 2025, IRCAM/Mozilla, Paris, France.&lt;/li&gt;
  &lt;li&gt;Xambó, Anna (May 29, 2025). &lt;a href=&quot;https://iclc.toplap.org/2025/catalogue/performance/sensing-the-alice-holt-forest.html&quot;&gt;Sensing the Alice Holt Forest&lt;/a&gt;, live performance. 9th International Conference on Live Coding (ICLC2025), 27 May 2025 - 31 May 2025, Barcelona, Spain.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;community&quot;&gt;Community&lt;/h2&gt;

&lt;p&gt;Several activities have involved community building: WHEN activities, invited talks, conference organisation, the research lab activities, PhD student examinations and public engagement activities.&lt;/p&gt;

&lt;h3 id=&quot;when&quot;&gt;WHEN&lt;/h3&gt;

&lt;p&gt;We had a second round of funding (ERIC Fund 2nd round 2024/25) to carry on with the activities of the &lt;a href=&quot;https://when.eecs.qmul.ac.uk&quot;&gt;EECS Women in Higher Education Network (WHEN)&lt;/a&gt;. We have celebrated togetherness with the WHEN members going to yoga, zumba, painting in the dark, silent disco, summer aperitif, cinema, workshops, science events, networking events, tea party, and meetups. Together with Katja Ivanova, we were interviewed by Aurélie Leroy for the School of Electronic Engineering and Computer Science News: &lt;a href=&quot;https://www.qmul.ac.uk/eecs/news-and-events/news/items/empowering-women-in-tech-at-queen-mary-the-story-behind-when.html&quot;&gt;Empowering women in tech at Queen Mary: the story behind WHEN&lt;/a&gt;.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2026-01-04-reflecting-on-2025/WHEN-founders.png&quot; /&gt;
	&lt;figcaption&gt;Katja Ivanova and Anna Xambó representing WHEN.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&quot;invited-talks&quot;&gt;Invited talks&lt;/h3&gt;

&lt;p&gt;In 2025, I have been invited to give several talks. Among all of them, I would like to highlight the invited keynote at the International Conference on Live Coding 2025 in Barcelona:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;(February 28, 2025). Invited talk &quot;Intersecting Sonic Arts &amp;amp; Computing: A Portfolio Journey&quot; at &lt;em&gt;Music Engineering Forum&lt;/em&gt;. University of Miami. Miami, FL, USA.&lt;/li&gt;
&lt;li&gt;(March 13, 2025). Oral Presenter (online): &lt;a href=&quot;https://supercollider-2025.github.io/paper-session-1/&quot;&gt;Live Coding a Chorale of Sounds Using MIRLCa: State of Affairs and Implications&quot;&lt;/a&gt;. &lt;em&gt;SuperCollider Symposium 2025&lt;/em&gt;. Johns Hopkins University. Washington D.C., USA. &lt;a href=&quot;https://supercollider-2025.github.io/papers/Xamb%C3%B3.pdf&quot;&gt;Paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;(May 14, 2025). Oral presenter with Peter Batchelor. Guest Lecture: “Sensing the Forest - UAL workshop”. Computer and Data Science and AI, UAL Creative Computing Institute, University of the Arts London, London, UK.&lt;/li&gt;
&lt;li&gt;(May 28, 2025). Invited keynote &lt;a href=&quot;https://iclc.toplap.org/2025/catalogue/keynote/keynote-anna-xambo.html&quot;&gt;“Liveness as an open work: an ongoing live-coding algorithmic journey”&lt;/a&gt; at the International Conference on Live Coding 2025 (ICLC 2025). &lt;/li&gt;
&lt;li&gt;(October 15, 2025). Oral Presenter: &quot;Sensing the Forest&quot;. Aix-Marseille University - Master in Acoustics and Musicology, Marseille, France. Online.&lt;/li&gt;
&lt;li&gt;(October 28, 2025). Oral Presenter (online): &quot;Sensing the Forest: Exploring Climate Change Through Soundscape Datasets from DIY Streamers at Alice Holt Forest&quot;. &lt;a href=&quot;https://blog.freesound.org/?p=2290&quot;&gt;Freesound Day programme&lt;/a&gt;, Barcelona and online. You can register following this &lt;a href=&quot;https://eventum.upf.edu/139645/detail/freesound-day.html&quot;&gt;link&lt;/a&gt; (attendance is free)&lt;/li&gt;
&lt;li&gt;(December 10, 2025). Oral Presenter: &lt;a href=&quot;https://sensingtheforest.github.io/2025/12/10/presentation-at-jean-golding-institute-december-10-2025/&quot;&gt;&quot;Sensing the Forest: A Multidisciplinary Exploration of Sound Data&quot;&lt;/a&gt;. &lt;em&gt;Turing Seminars&lt;/em&gt;, Wills Memorial Building, Jean Golding Institute, University of Bristol, Bristol, UK.&lt;/li&gt;
&lt;/ul&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2026-01-04-reflecting-on-2025/2025-05-28-ICLC2025-Anna_Xambo_keynote_photo_by_Santiago_Botero.jpg&quot; /&gt;
	&lt;figcaption&gt;Anna Xambó giving a keynote at UOC, ICLC 2025 (May 28, 2025).
Photo by Santiago Botero.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I have been also presenting posters at UKAIRS 2025 and WAC 2025:&lt;/p&gt;

&lt;li&gt;Poster presentation together with Gerard Roma: &lt;a href=&quot;https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/110219/Xambo%20Soundscape-based%20music%202025%20Accepted.pdf?sequence=2&quot;&gt;&quot;Soundscape-based music and creative AI: Insights and promises&quot;&lt;/a&gt;. &lt;em&gt;UK AI Research Symposium (UKAIRS ’25)&lt;/em&gt;, 8-9 September 2025, Newcastle, UK.&lt;/li&gt;
&lt;li&gt;Poster presentation together with Tug O&apos;Flaherty: &lt;a href=&quot;https://zenodo.org/records/17642480&quot;&gt;&quot;Sonification Mappings for Sensing Tree Stress: A DIY Approach&quot;&lt;/a&gt;. &lt;em&gt;Proceedings of the Web Audio Conference 2025 (WAC 2025)&lt;/em&gt;. Ircam/Mozilla, Paris, France.&lt;/li&gt;

&lt;h3 id=&quot;computational-sonic-arts-lab-meetups&quot;&gt;Computational Sonic Arts Lab meetups&lt;/h3&gt;

&lt;p&gt;I am very happy about the start of the monthly group meet-ups with the &lt;a href=&quot;https://compsonicartslab.github.io&quot;&gt;Computational Sonic Arts Laboratory (CSAL)&lt;/a&gt;, which started in October 2025. The research team is based in C4DM at QMUL, dedicated to advancing the intersection of sonic arts and cutting-edge technology. Apart from the meetups, we have launched a website to keep up with the latest news and announcements related to the group members.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2026-01-04-reflecting-on-2025/C4DM-CSAL-20251016-photo-v3-by-Shuoyang-Zheng.jpg&quot; /&gt;
	&lt;figcaption&gt;From left to right: Shuoyang Zheng, Lianganzi Wang, Anna Xambó Sedó, Nico García-Peguinho, Merlin Goldman and Jimena Arruti. Top, from left to right: Lina Bautista, Qiaoxi Zhang and Solomiya Moroz. Photo and photo composition by Shuoyang Zheng.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&quot;conference-organisation&quot;&gt;Conference organisation&lt;/h3&gt;

&lt;p&gt;In 2025, I have been involved in two conference as part of the organisation:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://iclc.toplap.org/2025&quot;&gt;International Conference on Live Coding 2025, UOC / TOPLAP Barcelona / Axolot Collective, Barcelona, May 27-31, 2025&lt;/a&gt;, as a member of the &lt;a href=&quot;https://iclc.toplap.org/2025/about.html&quot;&gt;Papers Committee&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://wac-2025.ircam.fr/&quot;&gt;Web Audio Conference 2025, IRCAM / Mozilla, Paris, November 19-21, 2025&lt;/a&gt;, as a &lt;a href=&quot;https://wac-2025.ircam.fr/committee.html&quot;&gt;Paper Co-chair&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It has been a pleasure to contribute to the two conferences, both as an organiser and author. To pick one of the numerous highlights: we had an insightful conversation, sharing our experiences and proposing new perspectives at the Diversity Panel hosted by the Web Audio Conference 2025. With the participation of: Panagiota Anastasopoulou, Silvia Binda Heiserova, Nela Brown, Elaine Chew, Joana Chicau, Aliénor Golvet, Bruan Guarnieri, Treya Nash, Zeynep Özcan, Luisa Pereira, Clara Rigaud, Jessica A. Rodriguez, Suzanne Saint-Cast, Ariane Stolfi, Rochelle Tham, Eveline Vervliet, and Anna Xambó.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2026-01-04-reflecting-on-2025/wac2025-diversity-panel.jpg&quot; /&gt;
	&lt;figcaption&gt;Diversity Panel hosted by the Web Audio Conference 2025, IRCAM / Mozilla, Paris, France.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&quot;phd-examiner&quot;&gt;PhD examiner&lt;/h3&gt;

&lt;p&gt;I have been honoured to be a PhD examiner for the following PhD candidates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;(November 5, 2025). Internal PhD examiner for Xiaojing Liu. PhD thesis title: &quot;Automatic Mixing for Teleconferencing, Gaming, and Live-Streaming&quot;. PhD degree in Computer Science; School of Electronic Engineering and Computer Science; Queen Mary University of London, UK.&lt;/li&gt;
&lt;li&gt;(March 6, 2025). Internal PhD examiner for Nicole Robson. PhD thesis title: []&quot;Human-Sound Interaction: The Relational Experience of (In)Audible Installation Art&quot;](https://qmro.qmul.ac.uk/xmlui/handle/123456789/111051). PhD degree in Media Arts and Technology; School of Electronic Engineering and Computer Science; Queen Mary University of London, UK.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;public-engagement&quot;&gt;Public engagement&lt;/h3&gt;

&lt;p&gt;I have also contributed to the AI &amp;amp; Music Hacklab powered by Sónar+D “AI Performance Playground”, and to the production of two podcasts related to the Sensing the Forest project:&lt;/p&gt;

&lt;li&gt;(December 17, 2025) &lt;a href=&quot;https://soundcloud.com/sensingtheforest/podcast-ep2&quot;&gt;Hear Nature Speak through Sound Installations, with Peter Batchelor and Chris Meigh-Andrews - at Alice Holt Forest, Hampshire&lt;/a&gt; (Episode 2) produced by Shuoyang Zheng. AHRC Sensing the Forest.&lt;/li&gt;
&lt;li&gt;(November 8, 2025) &lt;a href=&quot;https://soundcloud.com/sensingtheforest/podcast-ep1&quot;&gt;Hear Nature Speak through Sound Installations, A Podcast with Sensing the Forest Artists - at Alice Holt Forest, Hampshire&lt;/a&gt; (Episode 1) produced by Shuoyang Zheng. AHRC Sensing the Forest.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://sonar.es/en/news/apply-now-to-take-part-in-the-ai-performance-playground-an-ai-and-music-hacklab-powered-by-starts-deadline-28th-february&quot;&gt;Sónar+D Open Call: AI Performance Playground&lt;/a&gt;. Apply now to take part in the AI Performance Playground, an AI &amp;amp; Music Hacklab powered by S+T+ARTS. Deadline: 28th February.&lt;/li&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2026-01-04-reflecting-on-2025/sonarhacklab2025.jpg&quot; /&gt;
	&lt;figcaption&gt;From left to right: Eva Sjuve, Verônica Gesteira, Anna Xambó, Rubén Bañuelos, Ben Cantil.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;The &lt;a href=&quot;https://sonar.es/en/activity/ai-performance-playground-live&quot;&gt;Sónar+D “AI Performance Playground”&lt;/a&gt; was a blast. We co-facilitated the event initially with Peter Kirn (in the conceptualisation of the hacklab and selection of the candidates) and then together with Ben Cantil (DataMind) in the execution of the event, which has been curated by Antònia Folguera with the help of Gemma Belmonte. The hacklab took place between 11th and 14th June as part of Sónar+D 2025, powered by S+T+ARTS, with support from La Salle-URL. Artists, coders, musicians, DIY creators, and creative technologists were brought together to explore and deepen their use of machine learning tools, AI, and other related technologies for musical performance, which culminated in a collaborative performance at SonarÀgora, open to the general public at Sónar by Day. As part of the hacklab, we organised an enlightening talk panel with Rebecca Fiebrink (University of the Arts London), Nao Tokui (Neutone), Christopher Mitcheltree (Neutone/AIM/C4DM) and Shuoyang Zheng (AIM/C4DM), moderated by Ben Cantil (DataMind), to discuss the challenges and opportunities of being an artist using AI tools. For further info, you can read Shuoyang’s blog post on &lt;a href=&quot;https://www.aim.qmul.ac.uk/aim-at-sonard-2025/&quot;&gt;AIM at Sónar+D 2025&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In addition, I have been appointed technical co-director of the C4DM Studios, with the idea being to work in a team together with Johan Pauwels, Lewis Wostanholme, and Ciaran Corr, towards keeping the C4DM Studios active and accessible to all.&lt;/p&gt;

&lt;h2 id=&quot;other-achievements--press--expositions&quot;&gt;Other achievements / press / expositions&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;(November 1, 2025) &lt;a href=&quot;https://www.qmul.ac.uk/eecs/research/research-impact/case-study-sensing-the-forest/&quot;&gt;Sensing the forest: exploring climate change through sound, AI and art&lt;/a&gt;, Featured AI research case study series, EECS, QMUL.&lt;/li&gt;
  &lt;li&gt;(October 28, 2025) &lt;a href=&quot;https://www.qmul.ac.uk/eecs/news-and-events/news/items/empowering-women-in-tech-at-queen-mary-the-story-behind-when.html&quot;&gt;Empowering women in tech at Queen Mary: the story behind WHEN&lt;/a&gt;. In conversation with Katja Ivanova and Anna Xambó Sedó. Interview by Aurélie Leroy. News &amp;amp; Events, School of Electronic Engineering and Computer Science, Queen Mary University of London. &lt;a href=&quot;https://www.youtube.com/watch?v=D3lU4p1rxkw&quot;&gt;[Video]&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;(October 21, 2025) &lt;a href=&quot;https://soundcloud.com/sensingtheforest/podcast-ep1&quot;&gt;Hear Nature Speak through Sound Installations, A Podcast with Sensing the Forest Artists - at Alice Holt Forest, Hampshire&lt;/a&gt; (Episode 1) produced by Shuoyang Zheng. AHRC Sensing the Forest. The first episode presents an interview with Anna Xambó Sedó (Principal Investigator of Sensing the Forest, Queen Mary University of London) conducted by Hazel Stone (National Curator of Contemporary Art, Forestry England), and six pieces of voice commentary recorded by summer school artists Ed Chivers, Kate Anderson, Gabrielle Cerberville, Austin Blanton, Miles Scharff, and Luigi Marino.&lt;/li&gt;
  &lt;li&gt;(March 17, 2025) &lt;a href=&quot;https://www.qmul.ac.uk/gender-equality-directory-good-practice/eecs-women-higher-education-network-when/eecs-women-higher-education-network-when-.html&quot;&gt;Gender Equality Directory of Good Practice and Research: EECS Women Higher Education Network (WHEN)&lt;/a&gt;. In conversation with Katja Ivanova and Anna Xambó Sedó.&lt;/li&gt;
&lt;/ul&gt;

&lt;iframe width=&quot;680&quot; height=&quot;382&quot; src=&quot;https://www.youtube.com/embed/D3lU4p1rxkw?si=mGUVDfrrsDdZMv0o&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;

&lt;h2 id=&quot;travel&quot;&gt;Travel&lt;/h2&gt;

&lt;p&gt;Travelling is one of my favourites. This year, I have been travelling to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Oslo, Norway (3-5 March 2025, International visit with the AIM students at the University of Oslo).&lt;/li&gt;
  &lt;li&gt;Barcelona, Catalonia, Spain (27-31 May 2025, ICLC 2025; and 11-14 June 2025, Sonar+D).&lt;/li&gt;
  &lt;li&gt;Plymouth, UK and North of Spain (Cantabria, Asturias), in July 2025.&lt;/li&gt;
  &lt;li&gt;Newcastle, UK (8-9 September 2025, UKAIRS 2025).&lt;/li&gt;
  &lt;li&gt;Paris, France (19-21 November 2025, WAC 2025).&lt;/li&gt;
  &lt;li&gt;Highlands, Scotland, in November 2025.&lt;/li&gt;
  &lt;li&gt;Bristol, UK (10 December 2025, Turing seminars).&lt;/li&gt;
  &lt;li&gt;Begur, Costa Brava, Catalonia, in December 2025.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;new-years-resolutions&quot;&gt;New Year’s resolutions&lt;/h2&gt;

&lt;p&gt;This year has been all very exciting, but too roller-coaster at times. I am missing the ability to have more time to read classics and make music with new self-made tools. I think this should be possible, and it is, as a matter of fact, a question of being more efficient with the time organisation. Also, I am constantly advised to learn to say no. This is a must this year. I have been adapting Ryder Carroll’s Bullet Journal Method for some time now, but I think I should apply it even more systematically. So my main New Year’s resolution is to sacredly save regular time for my personal studio space, which I will temporarily call &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Experigardium&lt;/code&gt; – “the experimental garden.” I believe that this balance is essential to be successful in the long run.&lt;/p&gt;

&lt;p&gt;Happy New Year 2026!&lt;/p&gt;

&lt;h2 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h2&gt;

&lt;p&gt;I am thankful to be surrounded by talented and inspiring people at C4DM/EECS/QMUL, London. Special thanks to Gerard.&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="research" /><summary type="html">Anna Xambó performing at the Sala Beckett, ICLC 2025 (May 29, 2025). Photo by Miquel Martinez.</summary></entry><entry><title type="html">PhD Opportunity (QMUL S&amp;amp;E Underrepresented Group Studentships) - Collaborative Motion‑Based Hybrid Live Coding</title><link href="http://annaxambo.me/blog/announcements/2025/12/22/open-call-phd-opportunity-underrepresented-group-studentship/" rel="alternate" type="text/html" title="PhD Opportunity (QMUL S&amp;amp;E Underrepresented Group Studentships) - Collaborative Motion‑Based Hybrid Live Coding" /><published>2025-12-22T16:00:00+01:00</published><updated>2025-12-22T16:00:00+01:00</updated><id>http://annaxambo.me/blog/announcements/2025/12/22/open-call-phd-opportunity</id><content type="html" xml:base="http://annaxambo.me/blog/announcements/2025/12/22/open-call-phd-opportunity-underrepresented-group-studentship/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-12-22-phd-opportunity-qmul-s-e-underrepresented-group-studentships-collaborative-motion-based-hybrid-live-coding/Human-Robot-Music-Co‑Creation-Connected-Waves-Watercolor-Style.jpg&quot; /&gt;
	&lt;figcaption&gt;Image of Human-Robot Music Co‑Creation Connected&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&quot;about-the-phd-position&quot;&gt;About the PhD position&lt;/h3&gt;

&lt;p&gt;A position for UK/Home eligible students from Underrepresented Groups is available on the topic of “Collaborative Motion‑Based Hybrid Live Coding”.&lt;/p&gt;

&lt;p&gt;We are seeking prospective PhD applicants for a project on human–human and human–robot live coding performance and musical co‑creation via haptic communication. The research will investigate how a robot or a another human can perceive musical intent of a partner, generate its own “code” in real time, and perform in a shared creative environment. We are especially interested to see how this can be realised through motion features and haptic input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Co-supervisors&lt;/strong&gt;: Dr Ekaterina Ivanova &amp;amp; Dr Anna Xambó Sedó&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keywords&lt;/strong&gt;: live coding, human-robot interaction, haptic input&lt;/p&gt;

&lt;h3 id=&quot;application-deadline&quot;&gt;Application Deadline&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Application deadline: &lt;strong&gt;28th January 2026&lt;/strong&gt;
*Eligibility criteria and details of the scheme: &lt;a href=&quot;https://www.qmul.ac.uk/eecs/phd/phd-studentships/se-doctoral-research-studentships-202627-for-underrepresented-groups/&quot;&gt;https://www.qmul.ac.uk/eecs/phd/phd-studentships/se-doctoral-research-studentships-202627-for-underrepresented-groups/&lt;/a&gt;
How to apply: &lt;a href=&quot;https://www.qmul.ac.uk/eecs/phd/phd-studentships/se-doctoral-research-studentships-202627-for-underrepresented-groups/&quot;&gt;https://www.qmul.ac.uk/eecs/phd/phd-studentships/se-doctoral-research-studentships-202627-for-underrepresented-groups/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Interested? Contact us to discuss and tailor the topic to our joint interests.&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="announcements" /><summary type="html">Image of Human-Robot Music Co‑Creation Connected</summary></entry><entry><title type="html">Open Call PhD Position Funded by China Scholarship Council</title><link href="http://annaxambo.me/blog/announcements/2025/11/08/open-call-phd-position-funded-by-china-scholarship-council/" rel="alternate" type="text/html" title="Open Call PhD Position Funded by China Scholarship Council" /><published>2025-11-08T16:00:00+01:00</published><updated>2025-11-08T16:00:00+01:00</updated><id>http://annaxambo.me/blog/announcements/2025/11/08/open-call-phd-position</id><content type="html" xml:base="http://annaxambo.me/blog/announcements/2025/11/08/open-call-phd-position-funded-by-china-scholarship-council/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-11-08-open-call-phd-position-funded-by-china-scholarship-council/craiyon-geometric-patterns.jpg&quot; /&gt;
	&lt;figcaption&gt;Image generated using Craiyon AI.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&quot;about-the-phd-position&quot;&gt;About the PhD position&lt;/h3&gt;

&lt;p&gt;We are happy to announce an exciting PhD position to work on “Sonification techniques for understanding hidden processes of LLMs” at the &lt;a href=&quot;https://compsonicartslab.github.io&quot;&gt;Computational Sonic Arts Lab&lt;/a&gt;, &lt;a href=&quot;https://www.c4dm.eecs.qmul.ac.uk/&quot;&gt;Centre for Digital Music&lt;/a&gt;, School of Electronic Engineering and Computer Science, Queen Mary University of London, funded by the China Scholarship Council (CSC).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primary supervisor&lt;/strong&gt;: Dr Anna Xambó Sedó&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second supervisor&lt;/strong&gt;: Dr Charalamos Saitis&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topic&lt;/strong&gt;: Large language models (LLMs) are a type of artificial intelligence program that can recognise and generate text, which are trained on huge sets of data with a complex network of hidden processes. This PhD topic explores sonification techniques of LLMs for a better understanding of the way they process the information. Can we treat LLM engines such as ChatGPT as a musical instrument and listen to its internal processes? Can sonification techniques help us to hear and see how the information is processed? Compared to vinyl records or tape recordings, what is the acoustic signature, and what are the artefacts that are distinctive of this new medium? This work will contribute to addressing an important challenge in AI: making the inner workings and hidden knowledge of models more interpretable for people.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keywords&lt;/strong&gt;: sonification, large language models (LLMs), explainable AI&lt;/p&gt;

&lt;h3 id=&quot;application-deadline&quot;&gt;Application Deadline&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Application deadline&lt;/strong&gt;: 28th February 2026&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Eligibility criteria and details of the scheme&lt;/strong&gt;: &lt;a href=&quot;https://www.qmul.ac.uk/scholarships/items/china-scholarship-council-scholarships.html&quot;&gt;https://www.qmul.ac.uk/scholarships/items/china-scholarship-council-scholarships.html&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;How to apply&lt;/strong&gt;: &lt;a href=&quot;http://eecs.qmul.ac.uk/phd/how-to-apply/&quot;&gt;http://eecs.qmul.ac.uk/phd/how-to-apply/&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;You can find more info here:
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.findaphd.com/phds/program/csc-phd-studentships-in-electronic-engineering-and-computer-science/?i194p6295&quot;&gt;Findaphd&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.qmul.ac.uk/eecs/phd/phd-studentships/csc-phd-studentships-in-electronic-engineering-and-computer-science/&quot;&gt;EECS website&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For general enquiries, contact Mrs. Melissa Yeo m dot yeo at qmul.ac.uk (administrative enquiries) or Dr Arkaitz Zubiaga a dot zubiaga at qmul dot ac dot uk (academic enquiries) with the subject “EECS-CSC 2026 PhD scholarships enquiry”.&lt;/p&gt;

&lt;p&gt;For informal enquiries about the position, please contact me at a dot xambosedo at qmul dot ac dot uk.&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="announcements" /><summary type="html">Image generated using Craiyon AI.</summary></entry><entry><title type="html">My new lab</title><link href="http://annaxambo.me/blog/announcements/2025/05/16/my-new-lab/" rel="alternate" type="text/html" title="My new lab" /><published>2025-05-16T09:00:00+02:00</published><updated>2025-05-16T09:00:00+02:00</updated><id>http://annaxambo.me/blog/announcements/2025/05/16/my-new-lab</id><content type="html" xml:base="http://annaxambo.me/blog/announcements/2025/05/16/my-new-lab/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-05-16-my-new-lab/csal-logo.png&quot; /&gt;
	&lt;figcaption&gt;Computational Sonic Arts Laboratory logo&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I am very proud to announce that I have a research lab! The Computational Sonic Arts Laboratory (CSAL) aims to become a research hub in developing sustainable, inclusive, and forward-thinking technologies that transform how we create, experience, and understand music.&lt;/p&gt;

&lt;p&gt;The Computational Sonic Arts Laboratory is a research team based in the &lt;a href=&quot;https://www.c4dm.eecs.qmul.ac.uk/&quot;&gt;Centre for Digital Music&lt;/a&gt; (C4DM) at Queen Mary University of London dedicated to advancing the intersection of sonic arts and cutting-edge technology. The lab has been founded in 2025 as part of QMUL’s Centre for Digital Music.&lt;/p&gt;

&lt;p&gt;Rooted in principles of culture, creativity, and community, the lab explores sonic creativities and creative computing through innovative research in creative AI, music AI, and intelligent music systems. The vision of the lab is to bridge HCI, sound and music computing, and new interfaces for musical expression, by emphasising live coding, network music, and generative sound-based music.&lt;/p&gt;

&lt;p&gt;You can visit the lab website here: &lt;a href=&quot;https://compsonicartslab.github.io/&quot;&gt;compsonicartslab.github.io&lt;/a&gt;&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="announcements" /><summary type="html">Computational Sonic Arts Laboratory logo</summary></entry><entry><title type="html">Open Call PhD Position</title><link href="http://annaxambo.me/blog/announcements/2025/02/08/open-call-phd-position/" rel="alternate" type="text/html" title="Open Call PhD Position" /><published>2025-02-08T16:00:00+01:00</published><updated>2025-02-08T16:00:00+01:00</updated><id>http://annaxambo.me/blog/announcements/2025/02/08/open-call-phd-position</id><content type="html" xml:base="http://annaxambo.me/blog/announcements/2025/02/08/open-call-phd-position/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-08-open-call-phd-position/craiyon_162415_Nature_inspired_computing_for_sound_based_DIY_approaches_to_creative_AI_opt.png&quot; /&gt;
	&lt;figcaption&gt;Image generated using Craiyon AI.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;We are happy to announce an exciting PhD position to work on &lt;a href=&quot;https://www.findaphd.com/phds/project/nature-inspired-computing-for-sound-based-diy-approaches-to-creative-ai/?p182149&quot;&gt;“Nature-inspired computing for sound-based DIY approaches to creative AI”&lt;/a&gt; at the Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London. You will be part of a vibrant academic community with access to state-of-the-art facilities and mentorship from experienced researchers.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Application deadline&lt;/strong&gt;: 28th February 2025&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Requirements&lt;/strong&gt;: UK home student&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.qmul.ac.uk/eecs/phd/how-to-apply/&quot;&gt;How to apply&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;You can find more info &lt;a href=&quot;https://www.findaphd.com/phds/project/nature-inspired-computing-for-sound-based-diy-approaches-to-creative-ai/?p182149&quot;&gt;here&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For informal enquiries about the position, please contact me at a dot xambosedo at qmul dot ac dot uk&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="announcements" /><summary type="html">Image generated using Craiyon AI.</summary></entry><entry><title type="html">Reflecting on 2024</title><link href="http://annaxambo.me/blog/research/2025/02/08/reflecting-on-2024/" rel="alternate" type="text/html" title="Reflecting on 2024" /><published>2025-02-08T14:00:00+01:00</published><updated>2025-02-08T14:00:00+01:00</updated><id>http://annaxambo.me/blog/research/2025/02/08/reflecting-on-2024</id><content type="html" xml:base="http://annaxambo.me/blog/research/2025/02/08/reflecting-on-2024/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-08-reflecting-on-2024/StF-hackathon-nov-2024-anna-xambo-photo-by-Mahmoud Elmokadem.jpg&quot; /&gt;
	&lt;figcaption&gt;Anna Xambó at the AHRC Sensing the Forest Hackathon at Northern Research Station Edinburgh (November 13, 2024). Photo by Mahmoud Elmokadem.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;The date of this blog post, which was supposed to be published at the end of last year, tells already how busy it has been! It feels like time passes exponentially.&lt;/p&gt;

&lt;p&gt;Following my previous reflections on &lt;a href=&quot;/blog/research/2020/12/29/reflecting-on-2020/&quot;&gt;2020&lt;/a&gt;, &lt;a href=&quot;/blog/research/2021/12/31/reflecting-on-2021/&quot;&gt;2021&lt;/a&gt;, &lt;a href=&quot;/blog/research/2022/12/31/reflecting-on-2022/&quot;&gt;2022&lt;/a&gt; and &lt;a href=&quot;/blog/research/2024/01/20/reflecting-on-2023/&quot;&gt;2023&lt;/a&gt;, in this blog post, I reflect on how this year has been in terms of key activities and outputs, as well as discuss my intentions for 2025.&lt;/p&gt;

&lt;h2 id=&quot;research&quot;&gt;Research&lt;/h2&gt;

&lt;p&gt;This year, my research work has mostly focused on transferring the AHRC Sensing the Forest from De Montfort University to Queen Mary University of London and on executing the work packages with the project team, which has taken most of my research time.&lt;/p&gt;

&lt;p&gt;There has been also time for writing. This year, I have been involved in two journal articles, a conference paper, an abstract in proceedings, and a workshop position paper:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Xambó, A.&lt;/strong&gt;, Roma, G. (2024) &lt;a href=&quot;https://www.tandfonline.com/doi/full/10.1080/09298215.2024.2442355&quot;&gt;“Human-machine agencies in live coding for music performance”&lt;/a&gt;. Journal of New Music Research, 1–14 (Open Access).&lt;/li&gt;
  &lt;li&gt;Jawad, K., &lt;strong&gt;Xambó, A.&lt;/strong&gt; (2024) &lt;a href=&quot;https://www.frontiersin.org/articles/10.3389/fcomm.2023.1345124/full&quot;&gt;“Feminist HCI and narratives of design semantics in DIY music hardware”&lt;/a&gt;. Frontiers in Communication. 8:1345124 (Open Access).&lt;/li&gt;
  &lt;li&gt;Zheng, S., Del Sette, B.M., Saitis, C., &lt;strong&gt;Xambó, A.&lt;/strong&gt;, Bryan-Kinns, N. (2024) “Building Sketch-to-Sound Mapping with Unsupervised Feature Extraction and Interactive Machine Learning”. In &lt;em&gt;Proceedings of the New Interfaces for Musical Expression (NIME 24)&lt;/em&gt;. Utrecht, The Netherlands.&lt;/li&gt;
  &lt;li&gt;Marino, L., &lt;strong&gt;Xambó, A.&lt;/strong&gt; (2024) &lt;a href=&quot;https://static1.squarespace.com/static/6227c31a43daf21135453605/t/673e659f730d2433d5916462/1732142495610/21+Luigi+Marino+and+Anna+Xambo%CC%81.pdf&quot;&gt;Developing DIY Solar-Powered, Off-Grid Audio Streamers for Forest Soundscapes: Progress and Challenges&lt;/a&gt;. In Proceedings of the CHIME Annual One-day Music and HCI Conference 2024. The Open University, Milton Keynes, UK.&lt;/li&gt;
  &lt;li&gt;Zheng, S., &lt;strong&gt;Xambó, A.&lt;/strong&gt;, Bryan-Kinns, N. (2024) &lt;a href=&quot;https://ualresearchonline.arts.ac.uk/id/eprint/22115/1/XAIxArts_2024_paper_10.pdf&quot;&gt;“A Mapping Strategy for Interacting with Latent Audio Synthesis Using Artistic Materials”&lt;/a&gt;. 2nd international workshop on eXplainable AI for the Arts (XAIxArts) at the ACM Creativity and Cognition Conference.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;teaching&quot;&gt;Teaching&lt;/h2&gt;

&lt;p&gt;This year I have contributed to the following two modules while also doing the Postgraduate Certificate Academic Practice (PGCAP) to improve my teaching style (2024-ongoing):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;ECS742P Interactive Digital Multimedia Techniques (Autumn 2024)&lt;/li&gt;
  &lt;li&gt;ECS637U/ECS757P Digital Media and Social Networks (Spring 2024)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On 3 September 2024, I gave the invited tutorial &lt;a href=&quot;/blog/research/2025/02/06/tutorial-at-dafx24/&quot;&gt;Design strategies and techniques to better support collaborative, egalitarian and sustainable musical interfaces&lt;/a&gt; at the DAFx24 conference, University of Surrey, Guildford, UK.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-08-reflecting-on-2024/DAFx24-tutorial-anna-xambo-photo-by-DAFx24-team.jpg&quot; /&gt;
	&lt;figcaption&gt;Anna Xambó at the DAFx24 conference presenting a tutorial (September 3, 2024). Photo by the DAFx24 Team.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id=&quot;music&quot;&gt;Music&lt;/h2&gt;

&lt;p&gt;As part of the Interactive Digital Multimedia Techniques (IDMT), we organised a concert at the end of the module featuring the instruments built by the students during the module.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-08-reflecting-on-2024/IDMT-2024-concert.jpg&quot; /&gt;
	&lt;figcaption&gt;Poster for the IDMT 2024 Concert.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id=&quot;community&quot;&gt;Community&lt;/h2&gt;

&lt;h3 id=&quot;when&quot;&gt;WHEN&lt;/h3&gt;

&lt;p&gt;Together with Katja Ivanova, on March 15, 2024, we launched the &lt;a href=&quot;https://when.eecs.qmul.ac.uk&quot;&gt;EECS Women in Higher Education Network (WHEN)&lt;/a&gt;. During the year, we have organised a series of meetups and social events.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-08-reflecting-on-2024/when-logo.png&quot; width=&quot;400&quot; /&gt;
	&lt;figcaption&gt;WHEN logo.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&quot;invited-talks&quot;&gt;Invited talks&lt;/h3&gt;

&lt;p&gt;In 2024, I have been invited to give the following talks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;(April 24, 2024). &quot;In the search for sound-based music using MIRLCa, a SuperCollider extension for live coding a coral of sounds&quot;. NOTAM SuperCollider Meetup with James Harkins and Anna Xambó. Online.&lt;/li&gt;    
&lt;li&gt;(March 13, 2024). Seminar talk &quot;Collaborative, Participatory and Practice-based Research Methods for Sound and Music Computing&quot;. CogSci seminar. Queen Mary University of London. London, UK.&lt;/li&gt;
&lt;li&gt;(February 22, 2024). Seminar talk &quot;Sound and Music Computing Goes Wild: From Communities to Ecosystems&quot;. AIM Forum. Centre for Digital Music. Queen Mary University of London. London, UK.&lt;/li&gt;
&lt;li&gt;(January 24, 2024). Seminar talk &quot;Reflections on the use of CC sounds in creative computing&quot;. Research Forum. CeReNeM: Centre for Research in New Music. University of Huddersfield. Huddersfield, UK.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;phd-examiner&quot;&gt;PhD examiner&lt;/h3&gt;

&lt;p&gt;I have been honoured to be PhD examiner of the following PhD candidates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;(December 18, 2024). External PhD examiner for Aliénor Golvet. PhD thesis title: &quot;Distributed Music Systems: Designing Web-Based Tools for Artistic Research and Practice (Systèmes Musicaux Distribués: Concevoir des Outils Web pour la Recherche et la Pratique Artistique)&quot;. PhD degree in  Sciences et Technologies de la Musique et du Son (IRCAM · CNRS · Sorbonne Université), IRCAM, Paris, France.&lt;/li&gt;
&lt;li&gt;(August 14, 2024). External PhD examiner for CHEN Manni. PhD thesis title: &quot;Between Noise and Structure: Artificial Intelligence and Humans in Music Production&quot;. PhD degree in Creative Media; City University of Hong Kong, Hong Kong.&lt;/li&gt;
&lt;li&gt;(May 7, 2024). Internal PhD examiner for Elizabeth Wilson. PhD thesis title: &quot;Affective Live Coding: Fostering Human-Machine Collaboration with Autonomous Agents&quot;. PhD degree in Media Arts and Technology; School of Electronic Engineering and Computer Science; Queen Mary University of London, UK.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;public-engagement&quot;&gt;Public engagement&lt;/h3&gt;

&lt;p&gt;I have also contributed to panels of discussions, the opening of an art exhibition in the forest and the organisation of a hackathon in Edinburgh:&lt;/p&gt;

&lt;li&gt;(November 12-13, 2024). &lt;a href=&quot;https://sensingtheforest.github.io/2024/11/13/hackathon-at-northern-research-station-edinburgh-day-2/&quot;&gt;Hackathon at Northern Research Station Edinburgh&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;(June 20, 2024). &lt;a href=&quot;https://sensingtheforest.github.io/exhibition/&quot;&gt;Your Sonic Forest - Hear Nature Speak through Sound Installations in Alice Holt Forest&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;(13 June 2024) &lt;a href=&quot;https://sonar.es/en/activity/we-are-the-music-makers-ai-and-music-forum&quot;&gt;We are the music makers…&lt;/a&gt; with Anna Xambó (C4DM-QMUL), Magda Polo (UB EKHO), Rob Clouth, Sergi Jordà (UPF), and Thor Magnusson (Intelligent Instruments Lab). Moderated by Günseli Yalcinkaya (Dazed). Curated by Antònia Folguera. AI &amp;amp; Music Forum. Sonar Festival. Barcelona, Spain.
&lt;li&gt;(April 12, 2024). Participation in the Academic Panel on &lt;a href=&quot;https://www.soundoftomorrow.co.uk/&quot;&gt;The Development and Application of AI in Music&quot;&lt;/a&gt;. Hosted by ARU’s Dr. Sven-Amin Lembke. Panelists: Dr Anna Xambo Sedo, Dr Geraint A. Wiggins, Dr Oded Ben-Tal and Dr Robin Laney. &lt;em&gt;Sound of Tomorrow&lt;/em&gt;, Anglia Ruskin University&apos;s Helmore Recital Hall, Cambridge, UK.&lt;/li&gt;


&lt;br /&gt;





&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-08-reflecting-on-2024/SonarAgora_WeAreTheMusicMakers_photo_by_JosephJean-Marc.jpg&quot; /&gt;
	&lt;figcaption&gt;From left-right, Günseli Yalcinkaya, Thor Magnusson, Sergi Jordà, Rob Clouth, Anna Xambó and Magda Polo&apos;s team member. Group Photo by Joseph Jean Marc.&lt;/figcaption&gt;
&lt;/figure&gt;





&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-08-reflecting-on-2024/StF-group-photo-hackathon-day-1-photo-by-Mahmoud-Elmokadem.jpg&quot; /&gt;
	&lt;figcaption&gt;From left-right, Subhash Arockiadoss, Luigi Marino, Georgios Xenakis, Anna Xambó, Stanley Parker, Ning Liu, and Mahmoud Elmokadem. Group Photo by Mahmoud Elmokadem.&lt;/figcaption&gt;
&lt;/figure&gt;





&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-08-reflecting-on-2024/StF-summer-school-group-photo-by-Shuoyang-Zheng.jpg&quot; /&gt;
	&lt;figcaption&gt;Group Photo of the Summer School at Alice Holt Forest by Shuoyang Zheng.&lt;/figcaption&gt;
&lt;/figure&gt;



## Other achievements / press / expositions

* (June 7, 2024) [Unpopular opinion: AI won’t kill the music industry](https://www.dazeddigital.com/life-culture/article/62787/1/ai-won-t-kill-the-music-industry-sevdaliza-fka-twigs-holly-herndon). Article by Karma Peiró. Article by Günseli Yalcinkaya. DAZED.&lt;/li&gt;

&lt;h2 id=&quot;new-years-resolutions&quot;&gt;New Year’s resolutions&lt;/h2&gt;

&lt;p&gt;I am happy to announce the launch of the &lt;a href=&quot;https://compsonicartslab.github.io/&quot;&gt;Computational Sonic Arts Laboratory&lt;/a&gt;, a research team based in the Centre for Digital Music (C4DM) at Queen Mary University of London dedicated to advancing the intersection of sonic arts and cutting-edge technology.&lt;/p&gt;

&lt;p&gt;We are heading the final period of the &lt;a href=&quot;https://sensingtheforest.github.io&quot;&gt;AHRC Sensing the Forest&lt;/a&gt; project. We are very ambitious, but we also need to be realistic. This period is becoming the time to start completing prototypes and releasing outputs. We are working hard on those. Since I am at QMUL, the team has been expanding! I feel honoured to work with so many incredible minds.&lt;/p&gt;

&lt;p&gt;Happy New Year 2025!&lt;/p&gt;

&lt;h2 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h2&gt;

&lt;p&gt;Thanks to my colleagues at QMUL for helping me set up, for the opportunities, and for the vibrant conversations. Many thanks to the Sensing the Forest team for all the amazing moments and for working together with the challenges. Thank you to Gerard and my parents for their constant advice and help.&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="research" /><summary type="html">Anna Xambó at the AHRC Sensing the Forest Hackathon at Northern Research Station Edinburgh (November 13, 2024). Photo by Mahmoud Elmokadem.</summary></entry><entry><title type="html">Tutorial at DAFx24</title><link href="http://annaxambo.me/blog/research/2025/02/06/tutorial-at-dafx24/" rel="alternate" type="text/html" title="Tutorial at DAFx24" /><published>2025-02-06T14:00:00+01:00</published><updated>2025-02-06T14:00:00+01:00</updated><id>http://annaxambo.me/blog/research/2025/02/06/tutorial-at-dafx24</id><content type="html" xml:base="http://annaxambo.me/blog/research/2025/02/06/tutorial-at-dafx24/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24_LT.jpg&quot; /&gt;
	&lt;figcaption&gt;Location of the tutorial in the lecture theatre 03MS01, University of Surrey (September 3, 2024).&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This is a summary of the guest tutorial that I gave at &lt;a href=&quot;https://dafx24.surrey.ac.uk&quot;&gt;DAFx24&lt;/a&gt;, University of Surrey, UK.&lt;/p&gt;

&lt;p&gt;This was the brief outlined by Randall Ali, who curated the tutorials:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;DAFx is a community of people from academia and industry (150 people or so) that meet annually to discuss/present work related to digital audio signal processing for music and speech, design of digital audio effects, sound art, acoustics, and related applications. On the first day of the conference, we invite a few people from academia/industry to give a tutorial for about 2 hours, where particular concepts and/or techniques are taught from the ground up, often in a workshop style, but we leave that up to the speaker to decide how they want to do it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;Given that many in the community design audio effects and come up with algorithms to synthesize musical instruments, etc., one question that pops up a lot is how should musicians interact with these digital instruments/effects or what sorts of accomodations should one make when designing virtual instruments/effects so that it can be better controlled by the musician? With that in mind, we thought it would be great to have you give a tutorial in the area of real-time interactive systems, discussing some of the foundations and more recent technologies facilitiating this and how people in the community can use them in their practice, and all from your point of view of collaborative, egalitarian, and sustainable spaces. Like I mentioned, it doesn’t just have to be talking from slides, but you can make it hands-on as well, encouraging the participants to actively engage and learn the material as is typically done in tutorials.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The title and abstract of the tutorial was:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Tutorial #1 – Design strategies and techniques to better support collaborative, egalitarian and sustainable musical interfaces&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;A common challenge in the community of designing audio effects and algorithms to synthesise musical instruments is what are the design considerations to accommodate interaction experiences relevant to musicians, particularly among a diverse community of practitioners. This hands-on tutorial will cover some theoretical and practical foundations for designing interfaces for digital sound instruments and effects looking at how best to support collaborative, egalitarian and sustainable spaces.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The tutorial had 3 parts:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Part 1&lt;/strong&gt; - Interface design&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Part 2&lt;/strong&gt; - Mappings &amp;amp; user experience&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Part 3&lt;/strong&gt; - Data / References&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;part-1-interface-design&quot;&gt;Part 1: Interface design&lt;/h3&gt;

&lt;p&gt;In Part 1,  we discussed cultural aspects and design decisions around NIME designs. We looked at 10 DIY musical instruments created by women builders analysed in our paper &lt;em&gt;&lt;a href=&quot;https://www.frontiersin.org/articles/10.3389/fcomm.2023.1345124/full&quot;&gt;Jawad, K., &amp;amp; Xambó Sedó, A. (2024) “Feminist HCI and narratives of design semantics in DIY music hardware”, Frontiers in Communication, 8, 1345124&lt;/a&gt;&lt;/em&gt;, which uses the Feminist HCI framework from &lt;em&gt;&lt;a href=&quot;https://dl.acm.org/doi/10.1145/1753326.1753521&quot;&gt;Bardzell, S. (2010) “Feminist HCI: taking stock and outlining an agenda for design”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 1301–1310&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The ten instruments analysed in Jawad &amp;amp; Xambó (2024) are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Bell Controller by Stephanie Cheng Smith (2015)&lt;/li&gt;
  &lt;li&gt;Electronic_Khipu_ by Patricia Cadavid (2020)&lt;/li&gt;
  &lt;li&gt;The Exchange by Lori Napoleon (2012)&lt;/li&gt;
  &lt;li&gt;Mermy by Shan Ni (2020)&lt;/li&gt;
  &lt;li&gt;Prism Bell by Lia Mice (2019)&lt;/li&gt;
  &lt;li&gt;GramFX by Jassie Rios (2018)&lt;/li&gt;
  &lt;li&gt;Spring Spyre by Laetitia Sonami (2013)&lt;/li&gt;
  &lt;li&gt;Laser Koto by Miya Masaoka (2007)&lt;/li&gt;
  &lt;li&gt;SpaceTime Helix by Michela Pelusio (2012)&lt;/li&gt;
  &lt;li&gt;Soft Revolvers by Myriam Bleau (2014)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By prompting responses to 10 illustrations of the instruments without any reference and then 10 words of the names of the instruments without any reference, we navigated through the research question of &lt;em&gt;How fabulations of design semantics, through the lens of feminist HCI principles, can reshape our understanding of gender bias in object design within the realm of DIY musical instruments constructed by women builders?&lt;/em&gt;.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24_tagcloud1.jpg&quot; /&gt;
	&lt;figcaption&gt;Anonymous online responses to the question &apos;What do these 10 illustrations have in common?&apos; Platform used: Menti.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24_tagcloud2.jpg&quot; /&gt;
	&lt;figcaption&gt;Anonymous online responses to the question &apos;What do these 10 words have in common?&apos;. Platform used: Menti.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This led to talk about the following qualities of Feminist HCI:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Ecology/Materiality&lt;/strong&gt; - We read this quality as emphasising the interconnectedness of technology and the environment. Especially, what material solution has been chosen for the physical components and hardware of the instrument.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This part concluded with the following exercise:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Imagine/speculate a (re-)design for a musical interface focusing on shape, colour, and material/texture&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Below you can find two examples proposed by the workshop participants: an example of a musical interface based on strings and another example based on a string on a wooden corpus.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24-exercise1-example1.jpg&quot; /&gt;
	&lt;figcaption&gt;A sketch of a musical interface based on strings considering ecology/materiality.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24-exercise1-example2.jpg&quot; /&gt;
	&lt;figcaption&gt;A sketch of a musical interface based on a string on a wooden corpus considering ecology/materiality.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&quot;part-2-mappings--user-experience&quot;&gt;Part 2: Mappings &amp;amp; user experience&lt;/h3&gt;

&lt;p&gt;In Part 2, we discussed mappings and user experience (ways of involving users e.g. participatory design etc). This led to talk about two other qualities outlined in Feminist HCI:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Pluralism&lt;/strong&gt; - It is the quality of pluralism as emphasizing and recognizing diverse perspectives, experiences, and voices. It encourages the design of instruments representing the multiplicity of identities, cultures, and backgrounds. Each instrument has a unique approach.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Embodiment&lt;/strong&gt; - The approach undertaken considers quality in how the instruments can support and enhance embodied experiences, taking into account diverse abilities, gender expressions, and cultural practices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This part concluded with the following exercise:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Propose a mapping strategy from action to sound synthesis using your previous drawing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Below you can find some of the responses.&lt;/p&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24-exercise2-example1.jpg&quot; /&gt;
	&lt;figcaption&gt;A follow-up sketch of a musical interface based on strings presenting potential mappings considering materiality, pluralism and embodiment.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24-exercise2-example2.jpg&quot; /&gt;
	&lt;figcaption&gt;A sketch of a musical interface based on hugging a pillow presenting potential mappings considering materiality, pluralism and embodiment.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24-exercise2-example3.jpg&quot; /&gt;
	&lt;figcaption&gt;A sketch of a musical interface based on a baguette woodwind presenting potential mappings considering materiality, pluralism and embodiment. Photo by the DAFx24 Team.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2025-02-06-tutorial-at-dafx24/DAFx24-exercise2-example4.jpg&quot; /&gt;
	&lt;figcaption&gt;A sketch of a musical interface based on a jumping rope presenting potential mappings considering materiality, pluralism and embodiment.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&quot;part-3-data--references&quot;&gt;Part 3: Data / References&lt;/h3&gt;

&lt;p&gt;In Part 3, we discussed the relevance of what datasets with use and what references we cite by looking at data bias, bias and discrimination in AI systems, and citation practices. The discussion centred around the book &lt;a href=&quot;https://data-feminism.mitpress.mit.edu/&quot;&gt;D’Ignazio, C., &amp;amp; Klein, L. F. (2023). Data Feminism, MIT Press&lt;/a&gt;. Here, we had an open discussion about critically evaluating the datasets we use to identify potential biases and discuss the implications of these biases in data analysis and decision-making.&lt;/p&gt;

&lt;h2 id=&quot;resources&quot;&gt;Resources&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://dafx24.surrey.ac.uk/tutorials/&quot;&gt;Official webpage&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;/assets/zips/DAFx24.zip&quot;&gt;Slides and material&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h2&gt;

&lt;p&gt;Thanks to Randi Ali, Enzo De Sena, and the DAFx24 committee for the invitation and the challenge. Thank you to the workshop attendees for their participation in the discussion, tag clouds and sketches. Thanks also to Gerard Roma for his constant support and help. I enjoyed the conference! The diversity offered in the tutorials and keynotes was outstanding.&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="research" /><summary type="html">Location of the tutorial in the lecture theatre 03MS01, University of Surrey (September 3, 2024).</summary></entry><entry><title type="html">Reflecting on 2023</title><link href="http://annaxambo.me/blog/research/2024/01/20/reflecting-on-2023/" rel="alternate" type="text/html" title="Reflecting on 2023" /><published>2024-01-20T14:00:00+01:00</published><updated>2024-01-20T14:00:00+01:00</updated><id>http://annaxambo.me/blog/research/2024/01/20/reflecting-on-2023</id><content type="html" xml:base="http://annaxambo.me/blog/research/2024/01/20/reflecting-on-2023/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2024-01-20-reflecting-on-2023/adc23.jpg&quot; /&gt;
	&lt;figcaption&gt;Anna Xambó presenting at ADC23 (November 14, 2023). Photo by the ADC Team.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I have been trying to catch up with this blog post since early December 2023. On this occasion, it has not been possible until now. You will find out why at the end of this blog post.&lt;/p&gt;

&lt;p&gt;Anyhow, in this blog post, I reflect on how this year has been in terms of key activities and outputs, as well as propose my intentions for future achievements.&lt;/p&gt;

&lt;h2 id=&quot;research&quot;&gt;Research&lt;/h2&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2024-01-20-reflecting-on-2023/organised_sound.jpg&quot; width=&quot;300&quot; /&gt;
	&lt;figcaption&gt;Cover of the special issue &apos;Live Coding Sonic Creativities&apos; for Organised Sound.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This year has consolidated two projects that I have been working on for some time:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.cambridge.org/core/journals/organised-sound/latest-issue?sort=canonical.position%3Aasc&quot;&gt;Live Coding Sonic Creativities&lt;/a&gt;&lt;/strong&gt;: This special issue for Organised Sound, co-edited with Gerard Roma, Thor Magnusson and with the help of Leigh Landy, has been published. You can also find an article of mine about the &lt;a href=&quot;https://mirlca.dmu.ac.uk/&quot;&gt;MIRLCAuto project&lt;/a&gt;. You can read our editorial as well as my article, both Open Access, here:
    &lt;ul&gt;
      &lt;li&gt;Xambó, A., Roma, G., Magnusson, T. (eds.) (2023). &lt;a href=&quot;https://www.cambridge.org/core/journals/organised-sound/issue/C0C7BBEDEF8AEC11B0E23F26362A11A3&quot;&gt;Special Issue on Live Coding Sonic Creativities&lt;/a&gt;. Organised Sound. 28(2).
  Xambó, A. (2023). &lt;a href=&quot;https://www.cambridge.org/core/journals/organised-sound/article/discovering-creative-commons-sounds-in-live-coding/589A78F606268C256119823815213F06&quot;&gt;Discovering Creative Commons Sounds in Live Coding&lt;/a&gt;. Organised Sound (Open Access). 28(2): 276–289.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://sensingtheforest.github.io/&quot;&gt;Sensing the Forest&lt;/a&gt;&lt;/strong&gt;: &lt;a href=&quot;https://www.dmu.ac.uk/about-dmu/academic-staff/technology/peter-batchelor/peter-batchelor.aspx&quot;&gt;Peter Batchelor&lt;/a&gt; (DMU, LMS/MTI^2), &lt;a href=&quot;https://www.forestresearch.gov.uk/staff/matthew-wilkinson/&quot;&gt;Matthew Wilkinson&lt;/a&gt; (Forest Research), &lt;a href=&quot;https://www.forestresearch.gov.uk/staff/georgios-xenakis/&quot;&gt;Georgios Xenakis&lt;/a&gt; (Forest Research) and I as a Principal Investigator (DMU, LMS/MTI^2) were awarded an Arts and Humanities Research Council (AHRC) Early Career Grant for our project “Sensing the Forest: Let the Forest Speak using the Internet of Things, Acoustic Ecology and Creative AI”. The project will run from 2023 to 2025.
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://gtr.ukri.org/projects?ref=AH%2FX011585%2F1&quot;&gt;Sensing the Forest - Let the Forest Speak using the Internet of Things, Acoustic Ecology and Creative AI&lt;/a&gt; (UKRI official webpage)&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://sensingtheforest.github.io/&quot;&gt;Sensing the Forest - Project’s website&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://sensingtheforest.github.io/about/&quot;&gt;Sensing the Forest - Meet the team&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;music&quot;&gt;Music&lt;/h2&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2024-01-20-reflecting-on-2023/detuning-a-tuning.jpg&quot; width=&quot;400&quot; /&gt;
	&lt;figcaption&gt;detuning a tuning by Anna Xambó (Carpal Tunnel, 2023).&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;In 2023, I focused more on the music studio than on performing. I have released a solo album that was in the oven for a long time, as well as was honoured to participate in two special compilations. Also, my only two site-specific performances this year have occurred in two unique events and venues.&lt;/p&gt;

&lt;h3 id=&quot;album-release&quot;&gt;Album release&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://carpaltunnel.cat/releases/CT005.php&quot;&gt;detuning a tuning&lt;/a&gt;&lt;/strong&gt;: explores the sonic boundaries between tuning and detuning. The aural expedition consists of deconstructing audio recordings of Greg Chryssopoulos tuning a Kawai RX-6 grand piano at the School of Music, Georgia Tech, Atlanta, USA, recorded in February 2017. The album explores algorithmic music composition using SuperCollider with spectral modelling synthesis, mainly using the FluCoMa library for the latter. The recorded tuning process is listened to by self-built algorithms and used to control an abstract synthesiser. This process generates an organic evocative “décollage” of textural sounds merged with the recordings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;compilations&quot;&gt;Compilations&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;/music/compilations/#f-p-128&quot;&gt;Kicks &amp;amp; Cuts (remastered)&lt;/a&gt; (2:21) @ &lt;a href=&quot;https://soundcloud.com/femalepressure/fp-podcast-episode-128_ana-maria-romano&quot;&gt;f:p podcast episode 128_Ana Maria Romano G.&lt;/a&gt; (female:pressure, 5 January 2023) is a female:pressure podcast curated by Colombian artist Ana Maria Romano Gomez.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;/music/compilations/#loltrax001&quot;&gt;mnnw (extract)&lt;/a&gt; (3:55) @ &lt;a href=&quot;https://loleditions.bandcamp.com/album/loltrax001&quot;&gt;LOLTRAX001&lt;/a&gt; (December 1, 2023): Participation in the compilation LOLTRAX001 with the track mnnw (extract). LOL Editions. This compilation is curated by Joe Beedles and Guillaume Dujat. Recommended donation £8. Limited Edition Cassette £12. All proceeds are donated to MIND (UK-based mental health charity).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;solo-performances&quot;&gt;Solo Performances&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;/music/solo-performances/live-coding-+RAIN-2023/&quot;&gt;Ceci n’est pas une usine&lt;/a&gt;. +RAIN Film Fest, Universitat Pompeu Fabra/Sonar+D, Barcelona, Spain. June 14, 2023. Inspired by the location of the event at Ca l’Aranyó, a cotton factory from the 19th century that became a university audiovisual department in the 1980s, as well as the current ubiquitous AI turn, this session sonically inspects technological transformations from mechanical to automatic to synthesised to imagined soundscapes through live coding. This was presented in the LIVE event that was part of the 1st +RAIN Film Fest featuring works of Albert Barqué-Duran, Anna Xambó, Finding Light in the Distortion, Jennifer Walshe and Nao Tokui.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;/music/solo-performances/live-coding-I2MT-2023/&quot;&gt;(un)pack&lt;/a&gt;. I2MT Inaugural Concert, University of Nottingham, UK. November 30, 2023. The Interactive &amp;amp; Intelligent Music Technologies Research Group (I2MT) Inaugural Concert featured music works by Juan Martínez Ávila, Steve Benford, Craig Vear, John Richards (AKA Dirty Electronics) and special guest Anna Xambó. This live coding session investigates the transient everyday sound of packing and unpacking. The visible code will reveal the improvisational nature of the session, in which the use of own recordings will be enhanced with an augmented AI-powered digital sampler.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;teaching&quot;&gt;Teaching&lt;/h2&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2024-01-20-reflecting-on-2023/interacting-with-bed-of-nails.jpg&quot; width=&quot;500&quot; /&gt;
	&lt;figcaption&gt;Two students interact with the Bed of Nails.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I’ve been exploring teaching audio electronics using a range of different technologies. You can read my blog post about it here:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;/blog/teaching/2023/12/10/teaching-audio-electronics-using-hybrid-technologies/&quot;&gt;Teaching Audio Electronics using Hybrid Technologies&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;community&quot;&gt;Community&lt;/h2&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2024-01-20-reflecting-on-2023/livecodera.jpg&quot; width=&quot;400&quot; /&gt;
	&lt;figcaption&gt;LivecoderA&apos;s flyer.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;In 2023, I have been invited to give the following talks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;(March 3, 2023). Guest talk about NIMEs as part of the workshop &lt;a href=&quot;https://www.eventbrite.co.uk/e/introduction-to-making-digital-music-instruments-2-x-6-hour-workshops-tickets-453842243367&quot;&gt;Introduction to Making Digital Music Instruments&lt;/a&gt;. LEADD:NG. University of Nottingham.&lt;/li&gt;
&lt;li&gt;(June 7, 2023). Guest talk &lt;a href=&quot;https://github.com/ctechfilmuniversity/lecture_sose23_ppr/blob/main/README.md&quot;&gt;About NIME, NIMEness, and Speculative Futures&lt;/a&gt;. Creative Technologies II, Creative Technologies Master&apos;s Program, Film Universität Babelsberg Konrad Wolf, Potsdam, Germany.&lt;/li&gt;
&lt;li&gt;(October 25, 2023). Seminar talk &quot;Sensing the Forest: In the search for epistemological cross-pollination between forest research, HCI and SMC&quot;. &lt;a href=&quot;https://www.chime.ac.uk/chime-seminar-chris-nash-and-anna-xambo&quot;&gt; Online CHIME Seminar with Chris Nash and Anna Xambó&lt;/a&gt;. CHIME Seminar, Online. You can read a blog post about the CHIME presentation &lt;a href=&quot;https://sensingtheforest.github.io/2023/11/01/presentation-at-chime-october-25-2023/&quot;&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;(November 14, 2023). Keynote &lt;a href=&quot;https://adc23.sched.com/event/1PudY?iframe=no&quot;&gt;From NIME to NISE: Rethinking the design and evaluation of musical interfaces&lt;/a&gt;. Audio Developer Conference 2023 (ADC23), London, UK. You can read a blog post about the CHIME presentation &lt;a href=&quot;https://sensingtheforest.github.io/2023/12/18/presentation-at-adc23-november-14-2023/&quot;&gt;here&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;(December 15-16, 2023). Panellist of the panels &quot;AI and Ethics&quot; and &quot;AI in Performance&quot; at the symposium &lt;a href=&quot;https://eveeno.com/ai-in-music-symposium&quot;&gt;&quot;AI in Music (AINMUSIC): Agency, Performance, Production and Perception&quot;&lt;/a&gt;, University of Music Trossingen/KISS project, Trossingen, Germany / online.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Other highlights are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;/blog/research/2023/04/16/100-blog-posts/&quot;&gt;100 Blog Posts&lt;/a&gt; - A celebration of my blogging by reaching a hundred blog posts and talking about it.&lt;/li&gt;
  &lt;li&gt;Champlin, A., Chicau, J., Corfiel, M., Knotts, S., Marie, M., Saladino, I., Xambó, A. (2023) &lt;a href=&quot;http://annaxambo.me/pub/Champlin_et_al_2023_livecodera_community_report.pdf&quot;&gt;“LivecoderA Community Report”&lt;/a&gt; (Community Report Paper). In Proceedings of the International Conference of Live Coding (ICLC 2023). Utrecht, The Netherlands. A joint effort to share with the live coding community the genesis and mission of the solidarity group &lt;a href=&quot;https://livecodera.glitch.me/&quot;&gt;LivecoderA&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;other-achievements--press--expositions&quot;&gt;Other achievements / press / expositions&lt;/h2&gt;

&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2024-01-20-reflecting-on-2023/curtis-roads.jpg&quot; width=&quot;300&quot; /&gt;
	&lt;figcaption&gt;Photo of the Preface to the Second Edition of Curtis Road&apos;s The Computer Music Tutorial. Photo by Angela Brennecke.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;ul&gt;
  &lt;li&gt;(August/September, 2023) Testimonial of how I am coping with and using AI in my musical practice, published in the article &lt;a href=&quot;https://www.furious.com/perfect/artificialintelligencemusic.html&quot;&gt;AI and the Future of Human Made Music&lt;/a&gt;. Introduction and Interviews by Jonas Vognsen. Perfect Sound Forever online music magazine.&lt;/li&gt;
  &lt;li&gt;Citation of &lt;em&gt;‘Xambó, A. (2018) “Who Are the Women Authors in NIME? - Improving Gender Balance in NIME Research”. In Proceedings of the New Interfaces for Musical Expression (NIME ’18)’&lt;/em&gt; appearing in the Preface to the Second Edition of &lt;a href=&quot;https://mitpress.mit.edu/9780262044912/the-computer-music-tutorial/&quot;&gt;Curtis Road’s The Computer Music Tutorial&lt;/a&gt; (The MIT Press, 2023).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;new-years-resolutions&quot;&gt;New Year’s resolutions&lt;/h2&gt;

&lt;p&gt;2024 promises to bring change and consolidation. Change because I have started a new job position as &lt;a href=&quot;http://www.eecs.qmul.ac.uk/people/profiles/annaxambosedo.html&quot;&gt;Senior Lecturer in Sound and Music Computing&lt;/a&gt; at the &lt;a href=&quot;https://c4dm.eecs.qmul.ac.uk/&quot;&gt;Centre for Digital Music (C4DM)&lt;/a&gt;, School of Electronic Engineering and Computer Science, Queen Mary University of London, which is very exciting!&lt;/p&gt;

&lt;p&gt;Consolidation because we are and continue working hard with the Sensing the Forest project so that we can deliver what we promised. I feel honoured to work with a great team and we are enjoying how the project is growing (like a forest!), which is very thrilling!&lt;/p&gt;

&lt;p&gt;In 2023, I took a 1-year hiatus from social media (Facebook, Instagram and Twitter), which has been useful and informs my next steps this upcoming year.&lt;/p&gt;

&lt;p&gt;Happy and Fruitful New Year 2024!&lt;/p&gt;

&lt;h2 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h2&gt;

&lt;p&gt;I would like to dedicate this blog post to my colleagues and family at De Montfort University, the &lt;a href=&quot;https://www.dmu.ac.uk/research/research-faculties-and-institutes/technology/mtirc/staff.aspx&quot;&gt;Music, Technology and Innovation – Institute of Sonic Creativity (MTI^2)&lt;/a&gt;, with special thanks to Leigh Landy for his constant guidance and mentorship. Thanks MTI^2 for 4 years of friendship and many more to come. Thanks to Angela Brennecke for sharing with me the Second Preface of Curtis Road’s CMT and celebrating diversity in music technology. Last but not least, thanks to Gerard for his constant support.&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="research" /><summary type="html">Anna Xambó presenting at ADC23 (November 14, 2023). Photo by the ADC Team.</summary></entry><entry><title type="html">Teaching Audio Electronics using Hybrid Technologies</title><link href="http://annaxambo.me/blog/teaching/2023/12/10/teaching-audio-electronics-using-hybrid-technologies/" rel="alternate" type="text/html" title="Teaching Audio Electronics using Hybrid Technologies" /><published>2023-12-09T13:00:00+01:00</published><updated>2023-12-09T13:00:00+01:00</updated><id>http://annaxambo.me/blog/teaching/2023/12/10/teaching-audio-electronics-using-hybrid-technologies</id><content type="html" xml:base="http://annaxambo.me/blog/teaching/2023/12/10/teaching-audio-electronics-using-hybrid-technologies/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2023-12-09-teaching-audio-electronics-using-hybrid-technologies/interacting-with-bed-of-nails.jpg&quot; /&gt;
	&lt;figcaption&gt;Two students interact with a Bed of Nails circuit.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This year, I have taught &lt;strong&gt;&lt;em&gt;DIY musical instruments&lt;/em&gt;&lt;/strong&gt; as part of the module &lt;em&gt;MTEC2001 Presentation &amp;amp; Promotion&lt;/em&gt;. MTEC2001 is an open module in which the students can select one specialisation to undertake a short intense project. The four specialisations have been: &lt;em&gt;DIY musical instruments&lt;/em&gt;, &lt;em&gt;installation art&lt;/em&gt;, &lt;em&gt;improvisation ensemble&lt;/em&gt; and &lt;em&gt;advanced synthesis and sequencing&lt;/em&gt;. This is an L5 (or 2nd) level undergraduate course that is part of the programmes &lt;em&gt;BA Music Technology&lt;/em&gt; and &lt;em&gt;BSc Music Production&lt;/em&gt; at De Montfort University (DMU).&lt;/p&gt;

&lt;p&gt;This module is in the context of DMU’s block teaching, which means that, for the electives, the students have 12 sessions of 3 hours each session distributed in 7 weeks. In this blog post, I will share how the electronics projects went and the lessons learned. Previously, I published two other blog posts on teaching audio electronics, &lt;a href=&quot;/blog/teaching/2021/05/29/teaching-audio-electronics-during-covid/&quot;&gt;Teaching Audio Electronics during COVID: Inventiveness and Opportunities&lt;/a&gt; (May 2021) and &lt;a href=&quot;http://localhost:4000/blog/research/2022/07/01/teaching-audio-electronics-post-covid-a-hybrid-project-led-approach/&quot;&gt;Teaching Audio Electronics Post-COVID - A Hybrid Project-Led Approach&lt;/a&gt; (July 2022). The main difference is that here we went back to on-site teaching, and the students voluntarily chose the electronics theme.&lt;/p&gt;

&lt;h2 id=&quot;course-ethos&quot;&gt;Course Ethos&lt;/h2&gt;

&lt;p&gt;Partly due to the availability of the electronics lab only once a week (half of the module), partly due to my previous teaching experience, I stuck to the following rules:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Go hybrid&lt;/strong&gt;: Present two routes of the electronics world: &lt;em&gt;using&lt;/em&gt;, and &lt;em&gt;not using&lt;/em&gt;, a microcontroller: &lt;em&gt;Arduino-based&lt;/em&gt; vs &lt;em&gt;Arduinoless&lt;/em&gt;. The more options the merrier.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Let the students speak&lt;/strong&gt;: Show previous works from DMU students and let the students be critical using a formal rubric.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Teach how to fish, not just give a fish&lt;/strong&gt;: Show students where to find the resources and who to ask.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Open your ears and listen!&lt;/strong&gt;: Inspired by Patrick McGinley and his &lt;a href=&quot;https://frameworkradio.net/intros/&quot;&gt;Framework Radio introduction&lt;/a&gt;, invite the students to be open-minded, and open their ears to listening to other musical styles (bearing in mind that more than 50% of the students of this cohort are guitarists).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Promote group and individual work&lt;/strong&gt;: Propose group activities as well as individual activities so that students can have both experiences.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;content&quot;&gt;Content&lt;/h2&gt;

&lt;p&gt;Here’s the content of the module based on hands-on activities tailored for the electronics lab and in the classroom.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Day&lt;/th&gt;
      &lt;th&gt;Class&lt;/th&gt;
      &lt;th&gt;Content&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;Classroom&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Introduction to the workshop&lt;/strong&gt; &lt;br /&gt;* What is a digital musical instrument? &lt;br /&gt;* The Victorian Synthesiser (John Bowers) &lt;br /&gt;* Good versus bad circuits&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;Lab&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;The tools&lt;/strong&gt;&lt;br /&gt;* Arduino hardware &lt;br /&gt;* Arduino software &lt;br /&gt;* Breadboard&lt;br /&gt;* Tinkercad&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;Classroom&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Coil pickups &amp;amp; contact mics&lt;/strong&gt;&lt;br /&gt;* What is circuit sniffing?&lt;br /&gt;* Contact microphones&lt;br /&gt;*How to make a DIY instrument with a contact microphone?&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;4&lt;/td&gt;
      &lt;td&gt;Lab&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Sensors and actuators I / Sound I (basics)&lt;/strong&gt; &lt;br /&gt;* Demo of the oscilloscope (by Prakash Patel)&lt;br /&gt;* Sensors and actuators I (push button, LED, LDR)&lt;br /&gt;* Sound I (piezo buzzer)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;5&lt;/td&gt;
      &lt;td&gt;Classroom&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Acoustic feedback systems &amp;amp; self-made cigar box guitars&lt;/strong&gt;&lt;br /&gt;* Acoustic feedback systems&lt;br /&gt;* Cigar box guitars (by George Silver)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6&lt;/td&gt;
      &lt;td&gt;Lab&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Sound II&lt;/strong&gt;&lt;br /&gt;* The exploding capacitor (by Prakash Patel)&lt;br /&gt; * Tone generator using 40106 chip (by Prakash Patel)&lt;br /&gt;* Sound II (piezo and light sensor, digital oscillator)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;7&lt;/td&gt;
      &lt;td&gt;Classroom&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Bed of Nails&lt;/strong&gt;&lt;br /&gt;* Get to know about John Richards (Dirty Electronics)&lt;br /&gt;* Learn to read schematics&lt;br /&gt;* Build a solderless circuit&lt;br /&gt;* Build an electronic musical instrument from start to end&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;8&lt;/td&gt;
      &lt;td&gt;Lab&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Controlling your sounds&lt;/strong&gt;&lt;br /&gt;* Revision of the chip 40106 - how to scale it up to six oscillators for sound effects (by Prakash Patel)&lt;br /&gt;* Revision of last week’s worksheet Sound II&lt;br /&gt;* Organisation of projects and presentations&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;9&lt;/td&gt;
      &lt;td&gt;Classroom&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Pitching your idea + Manufacturing lab visit&lt;/strong&gt;&lt;br /&gt;* Be ready to pitch your project idea&lt;br /&gt;* Visit the manufacturing lab at Queens (lead: Stephen Cliff)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10&lt;/td&gt;
      &lt;td&gt;Lab&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Working on your project&lt;/strong&gt;&lt;br /&gt;* Bloom’s taxonomy of learning&lt;br /&gt;* Discuss your project (tutorial-style)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;11&lt;/td&gt;
      &lt;td&gt;Classroom&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Working on your project&lt;/strong&gt;&lt;br /&gt;* Fritzing software for drawing a circuit of your project&lt;br /&gt;* Discuss your project (tutorial-style)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;12&lt;/td&gt;
      &lt;td&gt;Lab&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Presentations&lt;/strong&gt;&lt;br /&gt;* Setting up&lt;br /&gt;* Presentations and video recordings of your proof-of-concepts&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h2 id=&quot;outcomes&quot;&gt;Outcomes&lt;/h2&gt;

&lt;p&gt;The objective for the eight enrolled students has been to create a simple musical instrument using electronics and produce a video of the operation of the instrument. Seven out of the eight students were successful with their projects.&lt;/p&gt;

&lt;p&gt;The projects were varied:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;an Arduino-based DIY synthesiser based on PWM using faders, a touchstrip, a 3D-printed enclosure and a piezo buzzer/amplifier,&lt;/li&gt;
  &lt;li&gt;an Arduino-based DIY metronome using a knob, piezo buzzer, small OLED display, an ADAfruit Neopixel 8-bit LED,&lt;/li&gt;
  &lt;li&gt;a DIY microphone using an old telephone handset,&lt;/li&gt;
  &lt;li&gt;an Arduino-based DIY theremin using an LDR, a force sensor and a piezo buzzer/amplifier,&lt;/li&gt;
  &lt;li&gt;a DIY Fuzz Face Pedal from scratch using transistors, capacitors, resistors, potentiometers, switches and jacks (which included the re-use of the components from another handmade circuit by desoldering the components), and&lt;/li&gt;
  &lt;li&gt;an Arduino-based DIY piano using push buttons and a piezo buzzer/amplifier.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;lessons-learned&quot;&gt;Lessons learned&lt;/h2&gt;

&lt;p&gt;Taking a hybrid technological approach worked well: each student selected their path in the world of DIY electronics. Some students like to code, others prefer the analogue world. Taking different paths and succeeding was possible thanks to the constant support and advice from the electronics lab technicians. The students have discovered a new research/professional path through interacting with the lab technicians and realising their projects.&lt;/p&gt;

&lt;p&gt;This has been a great student group (likely the best group) of audio electronics that I have had at DMU since I arrived in January 2020. It makes a difference if the students voluntarily pick the discipline they want to become experts in among a set of electives. Working with a small cohort is important here for a positive and customised learning experience. However, an open question is how sustainable is this approach if several modules are running in parallel in block teaching mode.&lt;/p&gt;

&lt;h2 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h2&gt;

&lt;p&gt;I am thankful to 3rd-year DMU student George Silver for giving a hands-on lecture about his self-made cigar box guitars, it was a blast for the students! Many thanks to Ashok Karavadra and Prakash Patel from the electronics lab at the Computing Engineering and Media (CEM) faculty for their constant help and technical support to both the lecturer and the students. Also thank you very much to Stephen Cliff from the CEM’s mechanical lab for showing the facilities and supporting the students with their projects. Thank you to Peter Batchelor for coordinating the entire module. Last but not least, thank you to the students for their constant engagement throughout the course.&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="teaching" /><summary type="html">Two students interact with a Bed of Nails circuit.</summary></entry><entry><title type="html">We Are Hiring! Postdoc in Sound and Music Computing</title><link href="http://annaxambo.me/blog/announcements/2023/07/28/we-are-hiring-postdoc-in-sound-and-music-computing/" rel="alternate" type="text/html" title="We Are Hiring! Postdoc in Sound and Music Computing" /><published>2023-07-28T15:00:00+02:00</published><updated>2023-07-28T15:00:00+02:00</updated><id>http://annaxambo.me/blog/announcements/2023/07/28/we-are-hiring-postdoc-in-sound-and-music-computing</id><content type="html" xml:base="http://annaxambo.me/blog/announcements/2023/07/28/we-are-hiring-postdoc-in-sound-and-music-computing/">&lt;figure class=&quot;&quot;&gt;
	&lt;img src=&quot;/assets/posts/2023-07-28-we-are-hiring-postdoc-in-sound-and-music-computing/sensing-the-forest.jpg&quot; /&gt;
	&lt;figcaption&gt;Sensing the forest. Let the forest speak.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;We are hiring! A part-time (50% FTE), 23-month Postdoctoral Research Fellow position is available to work at the forefront of Sound and Music Computing as part of the AHRC-funded project &lt;a href=&quot;/blog/announcements/2023/06/23/awarded-an-ahrc-early-career-grant/&quot;&gt;“Sensing the Forest - Let the Forest Speak using the Internet of Things, Acoustic Ecology and Creative AI”&lt;/a&gt; at De Montfort University (DMU) in Leicester, UK. The project is led by Anna Xambó Sedó (PI, DMU), Peter Batchelor (Co-I, DMU), Matthew Wilkinson (Co-I, Forest Research), and Georgios Xenakis (Co-I, Forest Research).&lt;/p&gt;

&lt;p&gt;The proposed project aims to raise awareness among forest visitors/aficionados, artists, scientists, and the general public about the connection between forests and climate change. Community building will centre on looking at a better understanding of forest behaviour using complex scientific data in creative and artistic ways.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Application link: &lt;a href=&quot;https://dmuhub.dmu.ac.uk:444/sap/bc/webdynpro/sap/hrrcf_a_posting_apply?PARAM=cG9zdF9pbnN0X2d1aWQ9MDA1MDU2QTU1QzY5MUVERThBRDk2Q0UxM0NEQURCNTImY2FuZF90eXBlPUVYVCY9&amp;amp;saml2=disabled&amp;amp;sap-client=900&amp;amp;sap-language=EN#&quot;&gt;https://dmuhub.dmu.ac.uk&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Application deadline: 13 August 2023&lt;/li&gt;
  &lt;li&gt;Interviews: 24-25 August 2023&lt;/li&gt;
  &lt;li&gt;Job start: 1 October 2023&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For informal enquiries about the position, please contact me at anna dot xambo at dmu dot ac dot uk&lt;/p&gt;</content><author><name>Anna Xambó</name></author><category term="announcements" /><summary type="html">Sensing the forest. Let the forest speak.</summary></entry></feed>