About

Welcome to ROCLING 2022!

Important Dates

  • Paper Submission Due: September 10 (Sat), 2022 (extended from September 3; final)
  • Notification of acceptance: September 30 (Fri), 2022
  • Camera-ready due: October 7 (Fri), 2022
  • Early Registration ends: October 14 (Fri), 2022
  • Late Registration ends: November 4 (Fri), 2022
  • On-Site Registration: November 21 - 22 (Mon - Tue), 2022
  • All deadlines are 11:59 pm UTC-12 (Anywhere on Earth)

ROCLING 2022 is the 34th annual Conference on Computational Linguistics and Speech Processing in Taiwan, sponsored by the Association for Computational Linguistics and Chinese Language Processing (ACLCLP). The conference will be held at Taipei Medical University, Daan Campus, Taipei City, Taiwan, on November 21-22, 2022.

ROCLING 2022 will provide an international forum for researchers and industry practitioners to share their new ideas, original research results and practical development experiences from all language and speech research areas, including computational linguistics, information understanding, and signal processing. ROCLING 2022 will feature oral papers, posters, tutorials, special sessions and shared tasks.

The Conference on Computational Linguistics and Speech Processing (ROCLING) was initiated in 1988 by the Association for Computational Linguistics and Chinese Language Processing (ACLCLP), with the major goal of providing a platform for researchers and professionals from around the world to share their experiences related to natural language processing and speech processing.

Programs

Information for Presenters:

To help you prepare your presentation, here is the important information for presenters.
Download Slide Template

For Oral Presentations:

Presentations may be given in either English or Chinese. Each presenter will have 15 minutes to present, followed by 4 minutes of questions and answers and 1 minute for the speaker change. Only under unavoidable circumstances (e.g., international speakers who cannot present live) may a presentation be pre-recorded and played during the conference. Presenters should introduce themselves to the session chairs before the start of their oral session. Each room will be equipped with:

● a laptop computer (Windows system), which can load PPT and PDF,
● a projector,
● a shared Internet connection,
● an audio system.

The display connectors for the screen are HDMI and VGA. Presenters who would like to use their own laptop must bring an adapter to connect to the HDMI/VGA cable, as well as an audio adapter if their laptop has a non-standard audio-out port. Prior to the session, presenters should inform the session chair and test that their computer and adapter work with the projector in the room. Wireless Internet will be available in the presentation rooms.

For Poster Presentations:

Posters should be A1 size (59.4 cm wide x 84.1 cm high, or 23.4 x 33.1 inches). Presenters are advised to mount their posters before the start of the session and remove them after it ends. Materials for mounting posters will be available on site.

Pre-recorded Video Instructions:

The purpose of the pre-recorded video is to give attendees an engaging way to gain insight into your contribution. Pre-recorded videos will be released at designated venues at ROCLING 2022. Clear guidelines help ensure a uniformly excellent experience for all, so we have established the following minimum expectations:

Duration: at least 5 minutes and at most 15 minutes. Within that range, choose a duration that you feel will best engage your audience. We encourage including a video of the presenter in a corner of the slides.
File size: 200 MB max
Video file format: MP4
Dimensions: minimum height 720 pixels, aspect ratio 16:9

Please note that the final specifications will be checked at submission time, and non-compliant files may need to be re-recorded. A download link for the recorded video must be provided to the conference by November 18.

KEYNOTE SPEAKERS


Prof. Makoto P. Kato

Matching Texts with Data for Evidence-based Information Retrieval

Speaker: Prof. Makoto P. Kato

  • Professor, University of Tsukuba, Japan
  • Time: 09:00 ~ 10:00, November 21, 2022
  • Session Chair: Min-Yuh Day

Biography

Makoto P. Kato received his Ph.D. from the Graduate School of Informatics, Kyoto University, in 2012. He is currently an associate professor in the Faculty of Library, Information and Media Science, University of Tsukuba, Japan, where he leads the Knowledge Acquisition System Laboratory (Kato Laboratory). In 2008, he received the WISE 2008 Kambayashi Best Paper Award for the co-authored article "Can Social Tagging Improve Web Image Search?". In 2010, he became a JSPS Research Fellow of the Japan Society for the Promotion of Science. Between 2010 and 2012, he completed internships at Microsoft Research Asia (under the supervision of Dr. Tetsuya Sakai, in the WIT and later the WSM group) and at Microsoft Research (under the supervision of Dr. Susan Dumais, in the CLUES group). From 2012 he was an assistant professor, and from 2019 an associate professor, in the Graduate School of Informatics, Kyoto University, before moving to the University of Tsukuba. His research interests include information retrieval, Web mining, and machine learning.

Abstract

We are now facing the problem of misinformation and disinformation on the Web, and search engines struggle to retrieve reliable information from a vast amount of Web data. One possible solution is to find reliable evidence supporting a claim on the Web. But what is "reliable evidence"? It can include authorities' opinions, scientific papers, or the wisdom of crowds. However, such evidence can also be subjective, as it is produced by people.

This talk discusses approaches that incorporate another, highly objective type of evidence --- numerical data --- for reliable information access.

(1) Entity Retrieval based on Numerical Attributes
Entity retrieval is the task of retrieving entities for a given text query, and is usually based on text matching between the query and entity descriptions. Our recent work instead matches the query against the numerical attributes of entities to produce explainable rankings. For example, our approach ranks cameras by numerical attributes such as resolution, f-number, and weight in response to queries such as "camera for astrophotography" and "camera for hiking".

(2) Data Search
When people encounter suspicious claims on the Web, data can serve as a reliable source for fact checking. NTCIR Data Search is an evaluation campaign that aims to foster data search research by developing an evaluation infrastructure and organizing shared tasks for data search. This talk introduces the first test collection for data search and some findings.

(3) Data Summarization
While the data search project attempts to develop a data search system for end users and help them make decisions based on data, it is still difficult for users to quickly interpret data. Therefore, data summarization techniques are also necessary to enable users to incorporate data in their information seeking process. Recent automatic visualization and text-based data summarization techniques are presented in this talk.

Prof. Junichi Yamagishi

Speech Synthesis Research 2.0

Speaker: Prof. Junichi Yamagishi

  • Professor, National Institute of Informatics, Japan
  • Time: 09:00 ~ 10:00, November 22, 2022
  • Session Chair: Yu Tsao

Biography

Junichi Yamagishi received the Ph.D. degree from Tokyo Institute of Technology in 2006 for a thesis that pioneered speaker-adaptive speech synthesis. He is currently a Professor with the National Institute of Informatics, Tokyo, Japan, and a Senior Research Fellow with the Centre for Speech Technology Research, University of Edinburgh, U.K. Since 2006, he has authored and co-authored more than 250 refereed papers in international journals and conferences. He was an area coordinator at Interspeech 2012, and he co-organized special sessions on "Spoofing and Countermeasures for Automatic Speaker Verification" at Interspeech 2013, the "ASVspoof evaluation" at Interspeech 2015, the "Voice Conversion Challenge 2016" at Interspeech 2016, the "2nd ASVspoof evaluation" at Interspeech 2017, and the "Voice Conversion Challenge 2018" at Speaker Odyssey 2018. He has served on the organizing committees of ASVspoof 2019 and the 10th ISCA Speech Synthesis Workshop 2019, the technical program committee of IEEE ASRU 2019, and the award committee of ISCA Speaker Odyssey 2020. He was a member of the IEEE Speech and Language Technical Committee, an Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing, and a Lead Guest Editor of the IEEE Journal of Selected Topics in Signal Processing special issue on spoofing and countermeasures for automatic speaker verification. He is currently a guest editor of the Computer Speech and Language special issue on speaker and language characterization and recognition (voice modeling, conversion, synthesis, and ethical aspects), and he chairs ISCA SynSIG. He received the Tejima Prize for the best Ph.D. thesis of Tokyo Institute of Technology in 2007.
He also received the Itakura Prize from the Acoustical Society of Japan in 2010, the Kiyasu Special Industrial Achievement Award from the Information Processing Society of Japan in 2013, the Young Scientists' Prize from the Minister of Education, Culture, Sports, Science and Technology in 2014, the JSPS Prize from the Japan Society for the Promotion of Science in 2016, and the DOCOMO Mobile Science Award from the Mobile Communication Fund in 2018.

Abstract

The Yamagishi Laboratory at the National Institute of Informatics researches text-to-speech (TTS) and voice conversion (VC) technologies. Having achieved TTS and VC methods that reproduce human-level naturalness and speaker similarity, we introduce three challenging projects we are currently working on as the next phase of our research.

(1) Rakugo speech synthesis [1]
As a challenging application of speech synthesis technology, particularly in entertainment, we have concentrated on rakugo, a traditional Japanese form of comic storytelling. We have been working on learning and reproducing the skills of a professional storyteller using speech synthesis. This project aims to build an "AI storyteller" that entertains listeners, an aim entirely different from the conventional speech synthesis task, whose primary purpose is to convey information or answer questions. The main story of a rakugo performance consists of conversations between various characters, all performed by a single storyteller who changes their voice appropriately so that listeners can follow the story and be entertained. Reproducing these characteristics of the rakugo voice with machine learning requires rakugo performance data and advanced modeling techniques. We therefore constructed a corpus of rakugo speech, free of noise and audience sounds, with the cooperation of an Edo-style rakugo performer, and modeled the data using deep learning. In addition, we benchmarked our system through subjective evaluation, comparing the generated rakugo speech with performances by rakugo storytellers of different ranks ("Zenza/前座," "Futatsume/二つ目," and "Shinuchi/真打").

(2) Speech intelligibility enhancement [2]
In remote communication, such as online conferencing, there is environmental background noise on both the speaker's and the listener's side. Speech intelligibility enhancement manipulates the speech signal so that it is not masked by noise on the listener's side, while keeping the volume unchanged. This is not a simple conversion task, since no "correct teacher data" exists; for this reason deep learning had not previously been applied, and there had been little technological progress, even though many practical applications exist, such as enhancing the intelligibility of station announcements. We therefore proposed a network structure called "iMetricGAN" and its training method: complex, non-differentiable speech intelligibility and quality indexes are treated as the outputs of the discriminator in a generative adversarial network, the discriminator learns to approximate these indexes, and, guided by the approximated indexes, the generator automatically transforms an input speech signal into an enhanced, easier-to-hear signal. Listening experiments confirmed that this transformation significantly improves keyword recognition in noisy environments.

(3) Speaker Anonymization [3, 4]
Now that it is becoming easy to build speech synthesis systems that digitally clone someone's voice from 'found' data on social media, there is a need to mask the speaker information in speech, along with other sensitive attributes that deserve protection. This is a new research topic, and it has not yet been clearly defined what achieving speaker anonymization means. We proposed a speaker anonymization method that combines speech synthesis and speaker recognition technologies. Our approach decomposes speech into three pieces of information: prosody, phoneme information, and a speaker embedding called the X-vector, which is standard in speaker recognition. It anonymizes the speaker's identity by averaging the X-vector over K speakers, and a neural vocoder then re-synthesizes a high-quality speech waveform. We also introduce a speech database and evaluation metrics for comparing speaker anonymization methods.
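The X-vector averaging step described above can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the implementation from [3, 4]: the function name, pool, and selection of K speakers are assumptions, and the cited papers describe more elaborate selection strategies.

```python
import numpy as np

def anonymize_xvector(pool_xvecs, k=20, rng=None):
    """Return the mean of K X-vectors sampled from an external speaker pool.

    pool_xvecs: (num_speakers, dim) array of X-vectors from other speakers.
    The averaged vector replaces the original speaker's X-vector before the
    waveform is re-synthesized with a neural vocoder.
    """
    rng = np.random.default_rng(rng)
    # Sample K distinct pool speakers and average their embeddings.
    idx = rng.choice(len(pool_xvecs), size=k, replace=False)
    return pool_xvecs[idx].mean(axis=0)

# Example: average 20 of 100 pool speakers' 512-dimensional X-vectors.
pool = np.random.default_rng(0).normal(size=(100, 512))
anon = anonymize_xvector(pool, k=20, rng=1)
print(anon.shape)  # (512,)
```

Because the result is a mean over K speakers, it no longer points to any single identity in embedding space, which is the intuition behind the averaging step.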

References
[1] Shuhei Kato, Yusuke Yasuda, Xin Wang, Erica Cooper, Shinji Takaki, Junichi Yamagishi "Modeling of Rakugo Speech and Its Limitations: Toward Speech Synthesis That Entertains Audiences,” IEEE Access, vol.8, pp.138149-138161, July 2020
[2] Haoyu Li, Junichi Yamagishi, “Multi-Metric Optimization Using Generative Adversarial Networks for Near-End Speech Intelligibility Enhancement,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.29, pp.3000-3011, Sept 2021
[3] Fuming Fang, Xin Wang, Junichi Yamagishi, Isao Echizen, Massimiliano Todisco, Nicholas Evans, Jean-Francois Bonastre, “Speaker Anonymization Using X-vector and Neural Waveform Models,” 10th ISCA Speech Synthesis Workshop (SSW10), Sept 2019
[4] Xiaoxiao Miao, Xin Wang, Erica Cooper, Junichi Yamagishi, Natalia Tomashenko, "Language-Independent Speaker Anonymization Approach using Self-Supervised Pre-Trained Models,” Odyssey 2022: The Speaker and Language Recognition Workshop, June 2022

DEMO SESSION: AI Tutorial I & II

AI Tutorial I

AICup Fall Competition Briefing: Labeling Explanatory Information for Natural Language Understanding

Time: Monday, November 21, 2022, 10:20-12:20
Instructor: Hen-Hsen Huang

Abstract

Argument mining is a natural language processing task that has recently attracted wide attention. It attempts to identify people's claims in text, along with the reasons that support or oppose those claims. However, beyond producing classification predictions, how a model actually arrives at its predictions, i.e., the explanatory elements behind them, remains insufficiently studied. This competition therefore targets argument mining: in addition to predicting whether sentences support or refute one another, models are asked to identify the key spans within sentences that serve as evidence for the prediction. Such information helps researchers better understand model behavior and advances natural language processing research; in future end applications, it can also provide the basis for a model's judgment, allowing humans to assess how reliable each judgment is.

AI Tutorial II

The Taiwanese Across Taiwan (TAT) Corpus and Its Applications

Time: Tuesday, November 22, 2022, 10:20 - 12:20
Instructor: Yuan-Fu Liao

Abstract

Over the past three years, the Taiwanese Across Taiwan (TAT) Taiwanese speech corpus has recruited 640 speakers across Taiwan and collected 312 hours of speech for Taiwanese speech recognition. For Taiwanese speech synthesis, two male and two female speakers were recruited, each recording 10 hours, for a total of 41 hours. The first-stage release of the corpus has been published by the Association for Computational Linguistics and Chinese Language Processing, and the second-stage release will soon be published by the Ministry of Education. Using tools such as Kaldi and ESPnet, we have also developed (1) Taiwanese speech recognition, (2) Taiwanese speech synthesis, and (3) a Taiwanese natural language parser, and (4) realized applications such as Taiwanese voice conversion.
In this tutorial, we will introduce the TAT corpus, show what can be built with it, and release these Taiwanese speech AI tools and pre-trained models, including (1) a web interface for general users, (2) pre-trained models and Colab example code for application developers, and (3) Kaldi and ESPnet model training scripts for advanced developers. We will walk through how these tools were built and how to use them, both to promote the TAT corpus and, we hope, to encourage more people to develop more and better Taiwanese speech AI tools.

Registration

Early Registration

(Before October 14, 2022)
Regular
  • ACLCLP Member: NT$ 4,000
  • ACLCLP Non-Member: NT$ 5,000
Student
  • ACLCLP Member: NT$ 1,500
  • ACLCLP Non-Member: NT$ 2,000
Sponsors
  • Free

Late Registration

(October 15 ~ November 4, 2022)
Regular
  • ACLCLP Member: NT$ 4,300
  • ACLCLP Non-Member: NT$ 5,300
Student
  • ACLCLP Member: NT$ 1,800
  • ACLCLP Non-Member: NT$ 2,300
Sponsors
  • Free

On-Site Registration

(November 21 - 22, 2022)
Regular
  • ACLCLP Member: NT$ 4,500
  • ACLCLP Non-Member: NT$ 5,500
Student
  • ACLCLP Member: NT$ 2,000
  • ACLCLP Non-Member: NT$ 2,500
Sponsors
  • Free

Registration Fees

  • Each accepted paper requires at least one full "Regular" registration.
  • The registration fee includes the conference souvenir, abstract booklet, lunches, coffee breaks, and the banquet. Registration fees are non-refundable once paid; related materials will be mailed to registrants after the conference.
  • An ACLCLP Member is a valid member of the Association for Computational Linguistics and Chinese Language Processing.
  • Former members who have not yet paid this year's dues, or whose membership has lapsed, should select "…(Member + Membership Fee)" as their registration category rather than applying for membership again.
  • Non-members who also wish to join the association should first apply through the "Member Area" of the ACLCLP website, then select "…(Member + Membership Fee)" as their registration category. (Go to the Member Area)
  • Those registering as "Student New Member" or "Student Non-Member" must upload proof of valid student status when registering.
  • International registrants must pay by credit card only (Visa or MasterCard).
  • Sponsors are asked to complete registration by November 4.
  • Registration fee receipts will be provided at check-in on the day of the conference.

Important Dates for Registration

  • Early Registration: by October 14 (Fri); payment must be received by October 21 (Fri).
  • Late Registration: October 15 (Sat) to November 4 (Fri); payment must be received by November 11 (Fri), with a NT$300 surcharge. Online credit-card payments must be completed by November 4 (Fri).
  • Online registration closes on November 4 (Fri). After that, please register on-site, with a NT$500 surcharge.

Methods of Payment

  • Postal giro: account name "The Association for Computational Linguistics and Chinese Language Processing", account number 19166251. (Multiple registrants from the same institution may combine their remittance; please note "ROCLING" and the registration numbers or registrants' names in the memo field.)
  • Online payment by credit card.

For registration inquiries, please contact

  • Contact: Ms. 何婉如 (Association for Computational Linguistics and Chinese Language Processing, ACLCLP)
  • E-mail: aclclp@aclclp.org.tw
  • Phone: +886-2-2788-1638

Special Session

Special Session: Construction and Application of Hakka Language Resources

Yuan-Fu Liao
Professor, Institute of Intelligence (智能研究所), National Yang Ming Chiao Tung University
yfliao@nycu.edu.tw
Talk: Introduction and Preliminary Results of the Hakka Across Taiwan Project

Huei-ling Lai
Distinguished Professor, Department of English, National Chengchi University
hllai@nccu.edu.tw
Talk: Taiwan Hakka Corpus: Construction, Current Development and Prospect

Shu-Chuan Tseng
Research Fellow and Deputy Director, Institute of Linguistics, Academia Sinica
tsengsc@gate.sinica.edu.tw
Talk: Corpus-based Research on Conversation Analysis and Speech Acquisition


Abstract

According to definitions in linguistics and language technology, a language resource is linguistic material used in the construction, improvement, and evaluation of language processing applications or platforms. Language resources are roughly divided into linguistic data, including text, vocabulary, grammar, language models, and other types of data, and technology tools for language processing and maintenance. After Taiwan Mandarin and Taiwan Southern Min, Taiwan Hakka is the third most widely spoken language in Taiwan. Based on the Hakka Affairs Council's 2017 survey, the Hakka population, under the Hakka Basic Law's definition of Hakka people as those who "have Hakka blood or origin, and who identify themselves as Hakka", is about 4.537 million, accounting for 19.3% of the national population. However, the same survey shows that the Hakka speaking and listening proficiency of Hakka people is declining, while the loss rate of the language is increasing. Given the mission of saving and preserving endangered languages, creating Hakka-related corpora and language resources is a top priority. This symposium brings together three experts and scholars mainly engaged in Hakka corpus and related research. Professor Huei-ling Lai introduces and shares the construction process and experience of the "Taiwan Hakka Corpus" (THC), currently a relatively large-scale, systematically constructed Hakka corpus. The THC construction has overcome various challenges posed by Hakka's idiosyncrasies, and a retrieval and word segmentation system for Hakka has also been developed. So far, the THC has collected Hakka spoken data (over 400,000 words in total) and Hakka written data (over 6 million words in total).
Second, Professor Yuan-Fu Liao introduces and shares the collection and construction experience of the "Hakka Across Taiwan" corpus, which widely records speech in the two major Hakka sub-dialects, Sixian and Hailu, as a foundation for Hakka speech synthesis and AI speech recognition. Finally, Dr. Shu-Chuan Tseng introduces and shares her research on Taiwan Mandarin speech corpora of adults and children, as well as her recent work on discourse understanding of Hakka conversation.

ROCLING 2022 Shared Task

Chinese Healthcare Named Entity Recognition

Organizers

Lung-Hao Lee (李龍豪), Department of Electrical Engineering, National Central University, lhlee@ee.ncu.edu.tw
Chao-Yi Chen (陳昭沂), Department of Electrical Engineering, National Central University, 110581007@cc.ncu.edu.tw
Liang-Chih Yu (禹良治), Department of Information Management, Yuan Ze University, lcyu@saturn.yzu.edu.tw
Yuen-Hsien Tseng (曾元顯), Graduate Institute of Library and Information Studies, National Taiwan Normal University, samtseng@ntnu.edu.tw

How to participate? Register here (due: August 20, 2022).

I. Background

Named Entity Recognition (NER) is a fundamental task in information extraction that locates mentions of named entities in unstructured text and classifies them (e.g., as person, organization, or location). The NER task has traditionally been solved as a sequence labeling problem, where entity boundaries and category labels are jointly predicted. Various methods have been proposed to tackle this research problem, including Hidden Markov Models (HMM) (Ponomareva et al., 2007), Maximum Entropy Markov Models (MEMM) (Chieu and Ng, 2003) and Conditional Random Fields (CRF) (Wei et al., 2015). More recently, neural networks have achieved impressive results, and the current state of the art for English NER has been reached with LSTM-CRF (Long Short-Term Memory - Conditional Random Field) networks (Chiu and Nichols, 2016; Lample et al., 2016; Ma and Hovy, 2016; Liu et al., 2018).

Chinese NER is more difficult than English NER. The Chinese language is logographic and provides no conventional features such as capitalization. In addition, because there are no delimiters between words, Chinese NER is closely tied to word segmentation, and named entity boundaries are also word boundaries. Incorrectly segmented entity boundaries therefore cause error propagation in NER. For example, in a particular context, the disease entity “思覺失調症” (schizophrenia) may be incorrectly segmented into three words: “思覺” (thinking and feeling), “失調” (disorder) and “症” (disease). Hence, character-based methods have been shown to outperform word-based approaches for Chinese NER (He and Wang, 2008; Li et al., 2014; Zhang and Yang, 2018).

In the digital era, healthcare information-seeking users usually search and browse web content in click-through trails to obtain healthcare-related information before making a doctor’s appointment for diagnosis and treatment. Web texts are valuable sources of healthcare information, such as health-related news, digital health magazines, and medical question/answer forums. Domain-specific healthcare information includes many proper names, mainly named entities. For example, “三酸甘油酯” (triglyceride) is a chemical found in the human body; “電腦斷層掃描” (computed tomography; CT) is a medical imaging procedure that uses computer-processed combinations of X-ray measurements to produce tomographic images of specific areas of the human body; and “靜脈免疫球蛋白注射” (intravenous immunoglobulin; IVIG) is a treatment for avoiding infections. In summary, Chinese healthcare NER is an important and essential natural language processing task that automatically identifies healthcare entities such as symptoms, chemicals, diseases, and treatments for machine reading and understanding.

II. Task Description

A total of 10 entity types are described, with examples, in Table 1 for Chinese healthcare named entity recognition. In this task, participants are asked to predict the named entity boundaries and categories for each given sentence. We use the common BIO (Beginning, Inside, Outside) format for NER tasks. The B- prefix on a tag indicates that the character is the beginning of a named entity, the I- prefix indicates that the character is inside a named entity, and the O tag indicates that the character belongs to no named entity. Example sentences follow.

Example 1:
● Input: 修復肌肉與骨骼最重要的便是熱量、蛋白質與鈣質。
● Output: O, O, B-BODY, I-BODY, O, B-BODY, I-BODY, O, O, O, O, O, O, O, O, O, B-CHEM, I-CHEM, I-CHEM, O, B-CHEM, I-CHEM, O

Example 2:
● Input: 如何治療胃食道逆流症?
● Output: O, O, O, O, B-DISE, I-DISE, I-DISE, I-DISE, I-DISE, I-DISE, O
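To make the character-level BIO scheme concrete, here is a minimal Python sketch (illustrative only, not part of any official task kit) that decodes a tag sequence such as Example 2 back into entity spans:

```python
def bio_decode(chars, tags):
    """Convert character-level BIO tags into (entity_text, type, start, end) spans."""
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:  # close the previous entity, if any
                spans.append(("".join(chars[start:i]), etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue  # still inside the current entity
        else:  # "O" or an inconsistent I- tag closes any open entity
            if start is not None:
                spans.append(("".join(chars[start:i]), etype, start, i))
            start, etype = None, None
    if start is not None:  # entity running to the end of the sentence
        spans.append(("".join(chars[start:]), etype, start, len(chars)))
    return spans

sent = "如何治療胃食道逆流症?"
tags = ["O", "O", "O", "O", "B-DISE", "I-DISE", "I-DISE", "I-DISE", "I-DISE", "I-DISE", "O"]
print(bio_decode(list(sent), tags))  # [('胃食道逆流症', 'DISE', 4, 10)]
```

The same function recovers the two BODY and two CHEM entities from Example 1, since each character carries exactly one tag.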


Table 1. Named Entity Types with Descriptions and Examples

Entity Type | Description | Examples
Body (BODY) | The whole physical structure that forms a person or animal, including biological cells, tissues, organs, and systems. | “細胞核” (nucleus), “神經組織” (nerve tissue), “左心房” (left atrium), “脊髓” (spinal cord), “呼吸系統” (respiratory system)
Symptom (SYMP) | Any feeling of illness or physical or mental change caused by a particular disease. | “流鼻水” (rhinorrhea), “咳嗽” (cough), “貧血” (anemia), “失眠” (insomnia), “心悸” (palpitation), “耳鳴” (tinnitus)
Instrument (INST) | A tool or other device used for performing a particular medical task such as diagnosis or treatment. | “血壓計” (blood pressure meter), “達文西手臂” (DaVinci Robots), “體脂肪計” (body fat monitor), “雷射手術刀” (laser scalpel)
Examination (EXAM) | The act of looking at or checking something carefully in order to discover possible diseases. | “聽力檢查” (hearing test), “腦電波圖” (electroencephalography; EEG), “核磁共振造影” (magnetic resonance imaging; MRI)
Chemical (CHEM) | Any basic chemical element typically found in the human body. | “去氧核糖核酸” (deoxyribonucleic acid; DNA), “糖化血色素” (glycated hemoglobin), “膽固醇” (cholesterol), “尿酸” (uric acid)
Disease (DISE) | An illness of people or animals caused by infection or a failure of health rather than by an accident. | “小兒麻痺症” (poliomyelitis; polio), “帕金森氏症” (Parkinson’s disease), “青光眼” (glaucoma), “肺結核” (tuberculosis)
Drug (DRUG) | Any natural or artificially made chemical used as a medicine. | “阿斯匹靈” (aspirin), “普拿疼” (acetaminophen), “青黴素” (penicillin), “流感疫苗” (influenza vaccination)
Supplement (SUPP) | Something added to the diet to improve human health. | “維他命” (vitamin), “膠原蛋白” (collagen), “益生菌” (probiotics), “葡萄糖胺” (glucosamine), “葉黃素” (lutein)
Treatment (TREAT) | A method or procedure used to treat diseases. | “藥物治療” (pharmacotherapy), “胃切除術” (gastrectomy), “標靶治療” (targeted therapy), “外科手術” (surgery)
Time (TIME) | A period of existence measured in minutes, days, or years. | “嬰兒期” (infancy), “幼兒時期” (early childhood), “青春期” (adolescence), “生理期” (on one’s period), “孕期” (pregnancy)

III. Data

● Training Set: Chinese HealthNER Corpus (Lee and Lu, 2021)
It includes 30,692 sentences with a total of around 1.5 million characters, or 91.7 thousand words. After manual annotation, it contains 68,460 named entities across 10 entity types: body, symptom, instrument, examination, chemical, disease, drug, supplement, treatment, and time.

● Test set: at least 3,000 Chinese sentences will be provided for system performance evaluation.

The policy of this shared task is an open test. Participating systems are allowed to use other publicly available data for this shared task, but the use of other data should be specified in the final system description paper.

IV. Evaluation

Performance is evaluated by comparing machine-predicted labels against human-annotated labels. We adopt the standard precision, recall, and F1-score, the most common evaluation metrics for NER systems, at the character level. A character in a testing instance is regarded as correctly recognized if its predicted BIO tag is completely identical to the gold standard, i.e., it matches one of the defined BIO tags. Precision is the percentage of named entities found by the NER system that are correct; recall is the percentage of named entities present in the test set that the NER system finds.
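Under one plausible reading of the character-level metric above (the official scorer may differ in details; this sketch is not it), precision, recall, and F1 can be computed as:

```python
def char_level_prf(gold, pred):
    """Character-level precision/recall/F1 over BIO tags.

    A character counts as a true positive only when its predicted tag
    (B-/I- prefix plus entity type) is exactly identical to the gold tag
    and is not "O".
    """
    tp = sum(g == p and g != "O" for g, p in zip(gold, pred))
    n_pred = sum(p != "O" for p in pred)  # entity characters predicted by the system
    n_gold = sum(g != "O" for g in gold)  # entity characters in the gold standard
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["O", "B-DISE", "I-DISE", "I-DISE", "O"]
pred = ["O", "B-DISE", "I-DISE", "O", "O"]
print(tuple(round(x, 3) for x in char_level_prf(gold, pred)))  # (1.0, 0.667, 0.8)
```

Here the system recovers two of the three gold entity characters and predicts nothing spurious, hence perfect precision but reduced recall.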

V. Important Dates

  • Release of training data: April 15, 2022
  • Release of test data: August 31, 2022
  • Testing results submission due: September 2, 2022
  • Release of evaluation results: September 5, 2022
  • System description paper due: September 20, 2022
  • Notification of acceptance: September 30, 2022
  • Camera-ready deadline: October 7, 2022

References

  • Hai Leong Chieu, and Hwee Tou Ng (2003). Named entity recognition with a maximum entropy approach. In Proceedings of the 7th Conference on Natural Language Learning (CoNLL’03), pp. 160–163.
  • Jason P. C. Chiu, and Eric Nichols (2016). Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357–370.
  • Jingzhou He, and Houfeng Wang (2008). Chinese named entity recognition and word segmentation based on character. In Proceedings of the 6th SIGHAN Workshop on Chinese Language Processing (SIGHAN’08), pp. 128–132.
  • Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer (2016). Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT’16), pp. 260–270.
  • Lung-Hao Lee, and Yi Lu (2021). Multiple Embeddings Enhanced Multi-Graph Neural Networks for Chinese Healthcare Named Entity Recognition. IEEE Journal of Biomedical and Health Informatics, 25(7): 2801- 2810.
  • Haibo Li, Masato Hagiwara, Qi Li, and Heng Ji (2014). Comparison of the impact of word segmentation on name tagging for Chinese and Japanese. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC’14), pp. 2532–2536.
  • Liyuan Liu, Jingbo Shang, Xiang Ren, Frank F. Xu, Huan Gui, Jian Peng, and Jiawei Han (2018). Empower sequence labeling with task-aware neural language model. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI’18), pp. 5253–5260.
  • Xuezhe Ma and Eduard Hovy (2016). End-to-end sequence labeling via Bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL’16), pp. 1064–1074.
  • Natalia Ponomareva, Ferran. Pla, Antonio Molina, and Paolo Rosso (2007). Biomedical named entity recognition: A poor knowledge HMM-based approach. In Proceedings of the 12th International Conference on Applications of Natural Language to Information Systems (NLDB’07), pp. 382–387.
  • Chih-Hsuan Wei, Robert Leaman, and Zhiyong Lu (2015). SimConcept: A hybrid approach for simplifying composite named entities in biomedical text. IEEE Journal of Biomedical and Health Informatics, 19(4):1385–1391.
  • Yue Zhang, and Jie Yang (2018). Chinese NER using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL’18), pp. 1554–1564.
Organization

Honorary Chair

  • Chien-Huang Lin, Taipei Medical University

Conference Chairs

  • Yung-Chun Chang, Taipei Medical University
  • Yi-Chin Huang, National Pingtung University

Program Chairs

  • Jheng-Long Wu, Soochow University
  • Ming-Hsiang Su, Soochow University

Demo Chair

  • Hen-Hsen Huang, Academia Sinica

Publication Chair

  • Yi-Fen Liu, Feng Chia University

Shared Task Chair

  • Lung-Hao Lee, National Central University

Special Session Chairs

  • Chin-Hung Chou, National Central University
  • Yuan-Fu Liao, National Taipei University of Technology

Organized by

Taipei Medical University
National Pingtung University
The Association for Computational Linguistics and Chinese Language Processing

VENUE

Transportation Information

Dear attendees, below is parking information for the area around Taipei Medical University. Because parking is limited and the area will be busy during the conference, we encourage you to take public transportation.

By car:

● (National Freeway 3) Exit via the Xinyi Expressway, taking the two left lanes at the exit. Continue straight on Xinyi Road Section 5 toward Keelung Road / the City Hall for about 1.1 km, turn left onto Keelung Road Section 2, and continue straight for 1 km; the Taipei Medical University Daan Campus will be on your right.
● (Huandong Boulevard) Follow the signs for Keelung Road, keep left through the Keelung Road underpass, and continue straight along Keelung Road Section 1 and then Section 2 for 1 km; the Taipei Medical University Daan Campus will be on your right.

Parking information:

Dudu Fang Parking, TMU Daan Campus Station
Address: B3-B6, No. 172-1, Keelung Road Section 2, Daan District, Taipei
Rate: NT$40 per hour; stays under 1 hour are charged as 1 hour; beyond the first hour, each period under 30 minutes is charged as 30 minutes.
Capacity: very limited
Distance: 0 m

Parking Lot 1: CITY PARKING (Taipei Tonghua Station)
Address: No. 1, Lane 171, Tonghua Street, Daan District, Taipei
Rate: 00:00-18:00, NT$50 per 30 minutes, with a daily maximum of NT$350; 18:00-24:00, NT$50 per 30 minutes, with no maximum
Capacity: about 66 spaces
Distance: 500 m

Parking Lot 2: Che-Ting Parking (Keelung Road Station)
Address: vacant lot next to No. 168, Keelung Road Section 2, Daan District, Taipei
Rate: NT$40 per hour; stays under 1 hour are charged as 1 hour; beyond the first hour, each period under 30 minutes is charged as 30 minutes.
Capacity: very limited
Distance: 140 m

Parking Lot 3: Jingqin No. 2 Park Underground Parking
Address: No. 49, Xin'an Street, Xinyi District, Taipei
Rate: NT$30 per hour; stays under 1 hour are charged as 1 hour; beyond the first hour, each period under 30 minutes is charged as 30 minutes.
Capacity: about 214 spaces
Distance: 550 m

Parking Lot 4: CITY PARKING (Wuxing Station)
Address:
Rate: NT$30 per hour; stays under 1 hour are charged as 1 hour; beyond the first hour, each period under 30 minutes is charged as 30 minutes.
Capacity: about 138 spaces
Distance: 750 m

By bus:

Routes 1, 1503, 207, 254, 282, 284, 284 Express, 292, 292 Sub-line, 611, 650, 672, Neihu Science Park commuter shuttle 10, and the Nangang Software Park commuter shuttles (Zhonghe and Shuanghe lines); alight at the George Vocational High School stop.

By MRT:

Take the Wenhu Line to Liuzhangli Station. From the station's single exit, walk along Keelung Road toward Taipei City Hall for about 300 m (about 5 minutes) to the 7-ELEVEN George store; the Taipei Medical University Daan Campus is directly across the street.

Banquet Information

JUMBO Seafood, Taipei Xinyi Branch

Please bring your banquet voucher to the restaurant; it must be presented at entry. Thank you.

Location: 3F, No. 12, Songgao Road, Xinyi District, Taipei 110 (Shin Kong Mitsukoshi A8)
Time: 18:00-20:30

By bus:

Several bus routes reach "Shin Kong Mitsukoshi Taipei Xinyi Place A8".
From the conference venue (TMU Daan Campus), cross Keelung Road to the George Vocational High School stop and take a bus to the banquet venue.
● (935): George Vocational High School to City Hall (Songgao), then walk 220 m (about 3 minutes)
● (Keelung Road trunk line, formerly 650): George Vocational High School to City Hall (Songzhi), then walk 400 m (about 5 minutes)
● (284, 611): George Vocational High School to Songshou Road intersection, then walk 450 m (about 6 minutes)

By MRT:

Take the Bannan Line to Taipei City Hall Station, leave by Exit 3, and walk along Xingya Road toward Songgao Road for about 300 m (about 5 minutes) to Shin Kong Mitsukoshi Taipei Xinyi Place A8.

By car:

Head southwest on Keelung Road Section 2, make a U-turn after Jih Sun Securities back onto Keelung Road Section 2, turn right onto Xinyi Road Section 5 for about 600 m, turn left onto Songzhi Road for about 1 km, then turn right onto Songgao Road for about 500 m; Shin Kong Mitsukoshi A8 will be on your right.
