LDR    02593nas a2200229 4500
008    2023 d
260    $c 06/2023
653 10 $a Decision Tree
653 10 $a E13B Fonts
653 10 $a Feature Extraction
653 10 $a Image Classification
653 10 $a Multilayer Perceptron
100 1  $a Chung-Hsing Chen
700 1  $a Ko-Wei Huang
245 00 $a Digit Recognition Using Composite Features With Decision Tree Strategy
856    $u https://www.ijimai.org/journal/sites/default/files/2023-05/ijimai8_2_10.pdf
300    $a 98-107
490 0  $v 8
520 3  $a At present, check transactions are one of the most common forms of money transfer in the market. The information for check exchange is printed using magnetic ink character recognition (MICR), which is widely used in the banking industry, primarily for processing check transactions. However, magnetic ink card readers are specialized and expensive, so general accounting departments and bookkeepers often resort to manual data registration instead. An organization that deals with parts or corporate services might have to process 300 to 400 checks each day, requiring considerable labor for the registration process. The cost of a single-sided scanner is only one-tenth that of an MICR reader; hence, image recognition technology is an economical alternative. In this study, we use multiple features for character recognition of the E13B font, which comprises ten digits and four symbols. For the digits, we use statistical features, such as image density and geometric features, with simple decision trees for classification. The E13B symbols are each composed of three distinct rectangles and are classified according to their size and relative position. Using the same sample set, MLP, LeNet-5, AlexNet, and a hybrid CNN-SVM network were trained on the digit portion as experimental control groups to verify the accuracy and speed of the proposed method. The results verify the performance and usability of the proposed method: it recognized all test samples correctly, achieving a recognition rate close to 100%, with a prediction time of less than one millisecond per character (0.03 ms on average), over 50 times faster than state-of-the-art methods. Its accuracy is also better than that of all compared state-of-the-art methods. The proposed method was additionally deployed on an embedded device to verify that it runs on a CPU rather than requiring a high-end GPU.
022    $a 1989-1660
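The abstract describes classifying E13B digits with simple decision trees over image-density features. A minimal, self-contained sketch of that general idea follows; the toy 6x6 glyphs, the 3x3 zoning grid, and the hand-picked thresholds are illustrative assumptions, not the paper's actual features or tree.

```python
# Toy sketch: zone-density features fed to a hand-written decision tree.
# The glyphs, grid size, and thresholds are illustrative assumptions only.

def zone_densities(glyph, rows=3, cols=3):
    """Split a binary glyph (list of '0'/'1' strings) into rows x cols
    zones and return the ink density (fraction of '1' pixels) per zone,
    in row-major order."""
    h, w = len(glyph), len(glyph[0])
    dens = []
    for r in range(rows):
        for c in range(cols):
            cells = [glyph[y][x]
                     for y in range(r * h // rows, (r + 1) * h // rows)
                     for x in range(c * w // cols, (c + 1) * w // cols)]
            dens.append(sum(ch == '1' for ch in cells) / len(cells))
    return dens

def classify(glyph):
    """A two-level decision tree over zone densities that separates
    three toy glyphs ('0', '1', '7')."""
    d = zone_densities(glyph)
    if d[4] < 0.2:      # hollow center -> '0'
        return '0'
    if d[6] > 0.2:      # ink in the bottom-left zone -> diagonal of '7'
        return '7'
    return '1'          # solid center, empty bottom-left -> '1'

# Toy 6x6 binary glyphs.
ZERO  = ["111111", "100001", "100001", "100001", "100001", "111111"]
ONE   = ["001100", "001100", "001100", "001100", "001100", "001100"]
SEVEN = ["111111", "000010", "000100", "001000", "010000", "100000"]

if __name__ == "__main__":
    for name, g in [("ZERO", ZERO), ("ONE", ONE), ("SEVEN", SEVEN)]:
        print(name, "->", classify(g))
```

Because each feature is a handful of pixel counts and each prediction is a couple of threshold comparisons, this style of classifier is why sub-millisecond, CPU-only prediction times like those reported in the abstract are plausible.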