LDR    02370nas a2200253 4500
008    2021 d
260    $c 12/2021
653 10 $a Deep Learning
653 10 $a Convolutional Neural Network (CNN)
653 10 $a Music
653 10 $a Information Retrieval
653 10 $a Music Information Retrieval (MIR)
653 10 $a Self-Similarity Matrix (SSM)
100 1  $a Carlos Hernandez-Olivan
700 1  $a Jose R. Beltran
700 1  $a David Diaz-Guerra
245 00 $a Music Boundary Detection using Convolutional Neural Networks: A Comparative Analysis of Combined Input Features
856    $u https://www.ijimai.org/journal/sites/default/files/2021-11/ijimai7_2_8_0.pdf
300    $a 78-88
490 0  $v 7
520 3  $a The analysis of the structure of musical pieces is a task that remains a challenge for Artificial Intelligence, especially in the field of Deep Learning. It requires the prior identification of the structural boundaries of the music pieces, a problem that has recently been addressed with unsupervised methods and with supervised neural networks trained on human annotations. The supervised neural networks used in previous studies are Convolutional Neural Networks (CNN) that take Mel-Scaled Log-magnitude Spectrograms (MLS), Self-Similarity Matrices (SSM) or Self-Similarity Lag Matrices (SSLM) as inputs. In previously published studies, pre-processing is carried out in different ways using different distance metrics, and different audio features are used to compute the inputs, so a generalised pre-processing method for calculating model inputs is missing. The objective of this work is to establish a general method of pre-processing these inputs by comparing the results obtained with inputs computed from different pooling strategies, distance metrics and audio characteristics, also taking into account the computation time needed to obtain them. We also establish the most effective combination of inputs to be delivered to the CNN so that the boundaries of the structure of the music pieces can be extracted most efficiently. With an adequate combination of input matrices and pooling strategies, we obtain an F1 score of 0.411, which outperforms current work carried out under the same conditions (the same publicly available dataset for training and testing).
022    $a 1989-1660
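
The 520 abstract describes computing MLS, SSM and SSLM input matrices with different distance metrics and pooling strategies before feeding them to a CNN. The following is a minimal sketch of that kind of pre-processing, not the authors' implementation: the library choices (librosa, SciPy) and the parameter values (number of mel bands, pooling factor, lag range, distance metric) are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's code) of the three CNN input matrices
# named in the abstract: MLS, SSM and SSLM. All parameter defaults below are
# assumptions chosen for illustration, not the authors' settings.
import numpy as np
import librosa
from scipy.spatial.distance import cdist

def mls_ssm_sslm(path, n_mels=80, pool=6, max_lag=200, metric="euclidean"):
    y, sr = librosa.load(path, sr=None)

    # Mel-Scaled Log-magnitude Spectrogram (MLS).
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                       hop_length=512, n_mels=n_mels)
    mls = librosa.power_to_db(S, ref=np.max)            # (n_mels, n_frames)

    # Max-pooling over time: one example of the pooling strategies compared.
    n_frames = (mls.shape[1] // pool) * pool
    pooled = mls[:, :n_frames].reshape(n_mels, -1, pool).max(axis=2)

    # Self-Similarity Matrix (SSM): pairwise distances between pooled frames
    # under the chosen metric, rescaled to similarities in [0, 1].
    frames = pooled.T                                    # (n_pooled, n_mels)
    dist = cdist(frames, frames, metric=metric)
    ssm = 1.0 - dist / (dist.max() + 1e-9)

    # Self-Similarity Lag Matrix (SSLM): similarity of each frame to the
    # frames up to `max_lag` steps in its past (Euclidean here for brevity).
    n = frames.shape[0]
    sslm = np.zeros((max_lag, n))
    for lag in range(1, min(max_lag, n - 1) + 1):
        d = np.linalg.norm(frames[lag:] - frames[:-lag], axis=1)
        sslm[lag - 1, lag:] = 1.0 - d / (d.max() + 1e-9)

    return mls, ssm, sslm
```

In this sketch, swapping the `metric` argument or the pooling factor reproduces, at a toy scale, the kind of input-combination comparison the abstract reports; the actual boundary-detection CNN that consumes these matrices is outside its scope.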