<doi_batch xmlns="http://www.crossref.org/schema/4.4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="4.4.0"><head><doi_batch_id>e99cbdb1-16cd-4623-9bb8-6055e23a4bda</doi_batch_id><timestamp>20220330092110024</timestamp><depositor><depositor_name>naun:naun</depositor_name><email_address>mdt@crossref.org</email_address></depositor><registrant>MDT Deposit</registrant></head><body><journal><journal_metadata language="en"><full_title>International Journal of Circuits, Systems and Signal Processing</full_title><issn media_type="electronic">1998-4464</issn><archive_locations><archive name="Portico"/></archive_locations><doi_data><doi>10.46300/9106</doi><resource>http://www.naun.org/cms.action?id=3029</resource></doi_data></journal_metadata><journal_issue><publication_date media_type="online"><month>1</month><day>7</day><year>2022</year></publication_date><publication_date media_type="print"><month>1</month><day>7</day><year>2022</year></publication_date><journal_volume><volume>16</volume><doi_data><doi>10.46300/9106.2022.16</doi><resource>https://npublications.com/journals/circuitssystemssignal/2022.php</resource></doi_data></journal_volume></journal_issue><journal_article language="en"><titles><title>Application of Speech Recognition Technology in Chinese English Simultaneous Interpretation of Law</title></titles><contributors><person_name sequence="first" contributor_role="author"><given_name>Xiao</given_name><surname>Yang</surname><affiliation>College of Foreign Languages, Xijing University, Xi’an 710123, China</affiliation></person_name></contributors><jats:abstract xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1"><jats:p>Speech recognition is an important research field in natural language processing. For Chinese and English, which have rich data resources, the performance of end-to-end speech recognition models is close to that of the Hidden Markov Model-Deep Neural Network (HMM-DNN) model. However, on the low-resource task of mixed Chinese-English speech recognition, end-to-end systems have not achieved good performance. This paper therefore studies end-to-end modeling methods under the condition of limited mixed Chinese-English data, focusing on two end-to-end models: Connectionist Temporal Classification (CTC) and the attention-based encoder-decoder network. To improve the performance of mixed Chinese-English speech recognition, the paper investigates how to improve the encoder of the CTC-based model and the attention mechanism, and attempts to combine the two models. On low-resource mixed Chinese-English data, the complementary advantages of the different models are exploited to improve end-to-end performance and thereby raise the recognition accuracy of speech recognition technology in legal Chinese-English simultaneous interpretation.</jats:p></jats:abstract><publication_date media_type="online"><month>3</month><day>30</day><year>2022</year></publication_date><publication_date media_type="print"><month>3</month><day>30</day><year>2022</year></publication_date><pages><first_page>956</first_page><last_page>963</last_page></pages><publisher_item><item_number item_number_type="article_number">117</item_number></publisher_item><ai:program xmlns:ai="http://www.crossref.org/AccessIndicators.xsd" name="AccessIndicators"><ai:free_to_read start_date="2022-03-30"/><ai:license_ref applies_to="am" start_date="2022-03-30">https://npublications.com/journals/circuitssystemssignal/2022/c382005-117(2022).pdf</ai:license_ref></ai:program><archive_locations><archive name="Portico"/></archive_locations><doi_data><doi>10.46300/9106.2022.16.117</doi><resource>https://npublications.com/journals/circuitssystemssignal/2022/c382005-117(2022).pdf</resource></doi_data><citation_list><citation
key="ref0"><doi>10.4218/etrij.2019-0400</doi><unstructured_citation>Oh Y R, Park K, Jeon H B, et al. Automatic proficiency assessment of Korean speech read aloud by non-natives using bidirectional LSTM-based speech recognition. ETRI Journal, 2020, 42(10):59-64. </unstructured_citation></citation><citation key="ref1"><doi>10.1038/s41467-020-16956-5</doi><unstructured_citation>Hovsepyan S, Olasagasti I, Giraud A L. Combining predictive coding and neural oscillations enables online syllable recognition in natural speech. Nature Communications, 2020, 11(1):78-84. </unstructured_citation></citation><citation key="ref2"><doi>10.3390/s19163481</doi><unstructured_citation>Cabral F S, Fukai H, Tamura S. Feature extraction methods proposed for speech recognition are effective on road condition monitoring using smartphone inertial sensors. Sensors, 2019, 19(16):3481-3488. </unstructured_citation></citation><citation key="ref3"><doi>10.1515/jisys-2018-0417</doi><unstructured_citation>Kumar A, Aggarwal R K. Discriminatively trained continuous Hindi speech recognition using integrated acoustic features and recurrent neural network language modeling. Journal of Intelligent Systems, 2020, 30(1):165-179. </unstructured_citation></citation><citation key="ref4"><doi>10.1109/tmm.2020.2976493</doi><unstructured_citation>Liu L, Feng G, Beautemps D, et al. Re-synchronization using the hand preceding model for Multi-modal fusion in automatic continuous cued speech recognition. IEEE Transactions on Multimedia, 2020, 12(99):1-10. </unstructured_citation></citation><citation key="ref5"><doi>10.1016/j.fuel.2020.117431</doi><unstructured_citation>Newgord C, Tandon S, Heidari Z. Simultaneous assessment of wettability and water saturation using 2D NMR measurements. Fuel, 2020, 270(11):117-131. </unstructured_citation></citation><citation key="ref6"><doi>10.1016/j.ssci.2020.104758</doi><unstructured_citation>Goerlandt F. 
Maritime autonomous surface ships from a risk governance perspective: Interpretation and implications. Safety Science, 2020, 128(6):104758. </unstructured_citation></citation><citation key="ref7"><doi>10.1016/j.clinimag.2019.05.009</doi><unstructured_citation>Mahalingam S, Bhalla N M, Mezrich J L. Curbside consults: Practices, pitfalls and legal issues. Clinical Imaging, 2019, 57(5):83-86. </unstructured_citation></citation><citation key="ref8"><doi>10.1109/access.2019.2918147</doi><unstructured_citation>Shi Y Y, Bai J, Xue P Y, et al. Fusion feature extraction based on auditory and energy for noise-robust speech recognition. IEEE Access, 2019, 7(10):81911-81922. </unstructured_citation></citation><citation key="ref9"><doi>10.1121/1.5100898</doi><unstructured_citation>Viswanathan N, Kokkinakis K. Listening benefits in speech-in-speech recognition are altered under reverberant conditions. The Journal of the Acoustical Society of America, 2019, 145(5):348-353. </unstructured_citation></citation><citation key="ref10"><doi>10.1109/tc.2019.2937075</doi><unstructured_citation>Yazdani R, Arnau J M, Gonzalez A. A low-power, high-performance speech recognition accelerator. IEEE Transactions on Computers, 2019, 68(12):1817-1831. </unstructured_citation></citation><citation key="ref11"><doi>10.1109/lsp.2018.2880285</doi><unstructured_citation>Kim G, Lee H, Kim B K, et al. Unpaired speech enhancement by acoustic and adversarial supervision for speech recognition. IEEE Signal Processing Letters, 2019, 26(1):159-163. </unstructured_citation></citation><citation key="ref12"><doi>10.1016/j.engappai.2021.104189</doi><unstructured_citation>Montenegro C, Santana V, Lozano J A. Analysis of the sensitivity of the End-Of-Turn detection task to errors generated by the automatic speech recognition process. Engineering Applications of Artificial Intelligence, 2021, 100(1):104-109. 
</unstructured_citation></citation><citation key="ref13"><doi>10.1016/j.specom.2020.01.001</doi><unstructured_citation>Sun R H, Chol R J. Subspace Gaussian mixture based language modeling for large vocabulary continuous speech recognition. Speech Communication, 2020, 117(10):21-27. </unstructured_citation></citation><citation key="ref14"><doi>10.1016/j.specom.2018.11.006</doi><unstructured_citation>Martinez A C, Gerlach L, Payá-Vayá G, et al. DNN-based performance measures for predicting error rates in automatic speech recognition and optimizing hearing aid parameters. Speech Communication, 2019, 106(6):44-56. </unstructured_citation></citation><citation key="ref15"><doi>10.1007/s10772-019-09637-2</doi><unstructured_citation>Ri H C. A usage of the syllable unit based on morphological statistics in Korean large vocabulary continuous speech recognition system. International Journal of Speech Technology, 2019, 22(4):971-977. </unstructured_citation></citation><citation key="ref16"><doi>10.1109/msp.2020.2969859</doi><unstructured_citation>Cui X, Zhang W, Finkler U, et al. Distributed training of deep neural network acoustic models for automatic speech recognition: A comparison of current training strategies. IEEE Signal Processing Magazine, 2020, 37(3):39-49. </unstructured_citation></citation><citation key="ref17"><doi>10.1016/j.ins.2020.09.047</doi><unstructured_citation>Li D, Zhou Y, Wang Z, et al. Exploiting the potentialities of features for speech emotion recognition. Information Sciences, 2021, 548(6):328-343. </unstructured_citation></citation><citation key="ref18"><doi>10.1016/j.heares.2021.108217</doi><unstructured_citation>Hülsmeier D, Schdler M R, Kollmeier B. DARF: A data-reduced FADE version for simulations of speech recognition thresholds with real hearing aids. Hearing Research, 2021, 404(2):108-117. 
</unstructured_citation></citation><citation key="ref19"><doi>10.1007/s10772-020-09690-2</doi><unstructured_citation>Jermsittiparsert K, Abdurrahman A, Siriattakul P, et al. Pattern recognition and features selection for speech emotion recognition model using deep learning. International Journal of Speech Technology, 2020, 23(4):1-8. </unstructured_citation></citation><citation key="ref20"><doi>10.1109/tce.2020.2986003</doi><unstructured_citation>Kawase T, Okamoto M, Fukutomi T, et al. Speech enhancement parameter adjustment to maximize accuracy of automatic speech recognition. IEEE Transactions on Consumer Electronics, 2020, 12(99):1-12.</unstructured_citation></citation></citation_list></journal_article></journal></body></doi_batch>