
Multimodal Machine Translation with Reinforcement Learning: A Novel A2C Approach

A review of a paper introducing a novel Advantage Actor-Critic (A2C) reinforcement learning framework for multimodal machine translation, combining visual and textual information.
translation-service.org | PDF Size: 0.8 MB
Rating: 4.5/5


1. Introduction

Machine Translation (MT) has traditionally relied solely on textual input. This paper explores Multimodal Machine Translation (MMT), which incorporates additional modalities such as images to improve translation quality. The main challenges addressed are the mismatch between the training objective (maximum likelihood estimation) and the final evaluation metric (e.g., BLEU), together with exposure bias in sequence generation.

The authors propose a novel solution based on Reinforcement Learning (RL), specifically the Advantage Actor-Critic (A2C) algorithm, to directly optimize the translation quality metric. The framework is applied to the WMT18 multimodal translation task using the Multi30K and Flickr30K datasets.

2. Related Work

The paper positions itself at the intersection of two fields: Neural Machine Translation (NMT) and Reinforcement Learning for sequence tasks. It references the foundational NMT work of Jean et al. and the Neural Image Captioning (NIC) framework of Vinyals et al. For RL in sequence prediction, it cites the work of Ranzato et al. using REINFORCE. The key differentiator is the application of A2C specifically to a multimodal translation architecture, where the policy must condition on both visual and textual context.

3. Methodology

3.1. Architecture

The proposed model is a dual-encoder, single-decoder architecture. A ResNet-based CNN encodes image features, while a bidirectional RNN (likely LSTM/GRU) encodes the source sentence. These multimodal representations are fused (e.g., via concatenation or attention) and fed into an RNN decoder, which acts as the Actor in the A2C framework, generating the target translation token by token.
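The fusion step can be sketched as follows; the dimensions, mean-pooling, and concatenation-plus-projection scheme are illustrative assumptions, since the review does not pin down the exact fusion mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not specify them.
IMG_DIM, HID_DIM, SRC_LEN = 2048, 256, 6

def fuse(image_feat, enc_states, W_img, W_txt):
    """Early fusion by concatenation plus projection (one plausible
    reading of the fusion step described in the review)."""
    txt_summary = enc_states.mean(axis=0)          # pool biRNN states
    fused = np.concatenate([image_feat @ W_img,    # project image -> HID_DIM
                            txt_summary @ W_txt])  # project text  -> HID_DIM
    return np.tanh(fused)                          # decoder initial state

image_feat = rng.standard_normal(IMG_DIM)                 # ResNet pooled feature
enc_states = rng.standard_normal((SRC_LEN, 2 * HID_DIM))  # biRNN outputs
W_img = rng.standard_normal((IMG_DIM, HID_DIM)) * 0.01
W_txt = rng.standard_normal((2 * HID_DIM, HID_DIM)) * 0.01

h0 = fuse(image_feat, enc_states, W_img, W_txt)
print(h0.shape)  # (512,) -> initial hidden state for the RNN decoder / Actor
```

The fused vector then seeds the decoder, whose per-step token distribution plays the role of the Actor's policy.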

3.2. Reinforcement Learning Formulation

Translation is framed as a Markov Decision Process (MDP).

The Critic network ($V_\phi(s_t)$) estimates the state value, reducing the variance of policy updates via the advantage $A(s_t, a_t) = Q(s_t, a_t) - V(s_t)$.
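A minimal numeric illustration of this advantage estimate, using the TD(0) form and a sparse-reward convention (zero intermediate rewards, BLEU only at the end of the sequence); the discount factor and value estimates are invented for illustration:

```python
gamma = 0.99  # illustrative discount factor; not specified in the review

def td_advantage(r_t, v_t, v_next):
    """TD(0) advantage estimate: A(s_t, a_t) = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    return r_t + gamma * v_next - v_t

# Sparse-reward setting: intermediate steps earn 0; BLEU arrives at the end.
mid  = td_advantage(0.0, 0.50, 0.55)   # 0.99 * 0.55 - 0.50 = 0.0445
last = td_advantage(0.32, 0.40, 0.0)   # 0.32 - 0.40        = -0.08
print(mid, last)
```

A positive advantage means the chosen token led to a better-than-expected state, so the policy update pushes its probability up; a negative advantage pushes it down.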

3.3. Training Procedure

Training combines maximum-likelihood (MLE) pretraining for stability with RL fine-tuning. The policy-gradient update with advantage is: $\nabla_\theta J(\theta) \approx \mathbb{E}[\nabla_\theta \log \pi_\theta(a_t|s_t) A(s_t, a_t)]$. The Critic is updated to minimize the temporal-difference error.

4. Experiments & Results

4.1. Datasets

Multi30K: Contains 30,000 images, each paired with an English description and German translations. Flickr30K Entities: Extends Flickr30K with phrase-level annotations, used here for a finer-grained multimodal grounding task.

4.2. Evaluation Metrics

Primary metric: BLEU (Bilingual Evaluation Understudy). Also reported: METEOR, and CIDEr for captioning quality where applicable.
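At the core of BLEU is clipped (modified) n-gram precision. A simplified single-reference sketch, omitting the brevity penalty and using only unigrams and bigrams, is:

```python
from collections import Counter
import math

def ngram_precision(cand, ref, n):
    """Clipped n-gram precision, the core ingredient of BLEU
    (simplified: one reference, no brevity penalty)."""
    c = Counter(zip(*[cand[i:] for i in range(n)]))
    r = Counter(zip(*[ref[i:] for i in range(n)]))
    overlap = sum(min(cnt, r[g]) for g, cnt in c.items())
    return overlap / max(sum(c.values()), 1)

cand = "er angelt an der bank".split()   # mistranslation of "bank"
ref  = "er angelt am ufer".split()       # human reference

p1 = ngram_precision(cand, ref, 1)       # 2/5 unigrams match = 0.4
p2 = ngram_precision(cand, ref, 2)       # 1/4 bigrams match  = 0.25
bleu2 = math.exp(0.5 * (math.log(p1) + math.log(p2)))  # geometric mean
print(p1, p2, bleu2)
```

Real BLEU uses up to 4-grams, multiple references, and a brevity penalty, but this shows why a wrong word choice ("bank" vs "ufer") directly lowers the reward signal the RL loop receives.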

4.3. Results Analysis

The paper reports that the proposed A2C-based MMT system outperforms the MLE baselines. Key findings include:

Hypothetical Results Table (Based on the Paper's Description):

| Model | Dataset | BLEU | METEOR |
|---|---|---|---|
| MLE Baseline (Text-Only) | Multi30K En-De | 32.5 | 55.1 |
| MLE Baseline (Multimodal) | Multi30K En-De | 34.1 | 56.3 |
| Proposed A2C MMT | Multi30K En-De | 35.8 | 57.6 |

5. Discussion

5.1. Strengths & Limitations

Strengths:

Limitations & Weaknesses:

5.2. Future Directions

The paper suggests exploring more sophisticated reward functions (e.g., combining BLEU with semantic similarity), applying the framework to other multimodal seq2seq tasks (e.g., video captioning), and investigating more efficient RL algorithms such as PPO.
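As a sketch of the suggested composite reward, the two signals could be mixed linearly; the weighting and the assumption that both scores are scaled to [0, 1] are illustrative, not from the paper:

```python
# Hypothetical composite reward mixing BLEU with a semantic-similarity
# score, as the future-work section suggests. The weight alpha and the
# assumption that both inputs lie in [0, 1] are illustrative only.
def composite_reward(bleu, semantic_sim, alpha=0.7):
    """R = alpha * BLEU + (1 - alpha) * semantic similarity."""
    return alpha * bleu + (1 - alpha) * semantic_sim

r = composite_reward(0.30, 0.80)  # 0.7 * 0.30 + 0.3 * 0.80 = 0.45
print(r)
```

Such a blend rewards translations that are semantically faithful even when their surface n-grams diverge from the reference, partially addressing BLEU's sparsity.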

6. Original Analysis & Expert Insight

Core Insight: This paper is not merely about adding images to translation; it marks a shift from imitating data (MLE) to directly pursuing the objective (RL). The authors correctly identify a fundamental mismatch in standard NMT training. Their choice of A2C is sensible: more stable than vanilla policy gradients (REINFORCE), yet less complex than full PPO at the time, making it a viable first step into a new application area.

Logical Flow & Strategic Positioning: The reasoning is sound: 1) MLE suffers from objective mismatch and exposure bias, 2) RL addresses this by using the evaluation metric as the reward, 3) multimodality adds crucial disambiguating context, 4) therefore RL + multimodality should yield superior results. This places the work at the intersection of three major topics (NMT, RL, vision-and-language), a strategic choice for impact. A weakness of the paper, however, common in early RL-for-NLP work, is that it understates the engineering burden of RL training: variance, reward shaping, and hyperparameter sensitivity, which often make reproduction elusive, as noted in surveys from groups such as Google Brain and FAIR.

Strengths & Weaknesses: The main strength is the clear articulation and validation of the idea on standard benchmarks. The weaknesses lie in the details deferred to future work: a sparse BLEU reward is a blunt instrument. Research from Microsoft Research and AllenAI has shown that dense, intermediate rewards (e.g., for grammatical correctness) or adversarial rewards are often essential for consistently high-quality generation. The multimodal fusion method is also likely simple (early concatenation); more dynamic mechanisms such as co-attention (inspired by architectures like ViLBERT) would be a necessary evolution.

Practical Takeaways: For practitioners, this paper signals that objective-aligned training is the future of generative AI, not just translation. The actionable step is to design loss functions and training loops that reflect your true evaluation metric, even if it means leaving the comfort of MLE. For researchers, the next step is clear: hybrid schemes. Pretrain with MLE for a good initial policy, then fine-tune with RL on a metric-based reward, perhaps combined with GAN-style discriminators for fluency, as seen in advanced text generation systems. The future lies in multi-objective optimization, combining the stability of MLE with the direct objective of RL and the adversarial sharpness of GANs.

7. Technical Details

Key Formulations:

The core RL update uses the policy gradient theorem with an advantage baseline:

$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}[\nabla_\theta \log \pi_\theta(a|s) \, A^{\pi_\theta}(s,a)]$

where $A^{\pi_\theta}(s,a) = Q(s,a) - V(s)$ is the advantage function. In A2C, the Critic network $V_\phi(s)$ learns to approximate the state-value function, and the advantage is estimated as:

$A(s_t, a_t) = r_t + \gamma V_\phi(s_{t+1}) - V_\phi(s_t)$ (for $t < T$), with $r_T$ being the final BLEU score.

The loss functions are:

Actor (policy) loss: $L_{actor} = -\sum_t \log \pi_\theta(a_t|s_t) A(s_t, a_t)$

Critic (value) loss: $L_{critic} = \sum_t (r_t + \gamma V_\phi(s_{t+1}) - V_\phi(s_t))^2$
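A toy computation of both losses over a three-step episode, using the sparse-reward convention above (zero intermediate rewards, the BLEU score at the end); all numbers are invented for illustration:

```python
import numpy as np

gamma = 0.99
# Toy 3-step episode with a sparse terminal reward (the BLEU score).
rewards     = np.array([0.0, 0.0, 0.32])    # r_T = final BLEU
values      = np.array([0.20, 0.25, 0.30])  # V_phi(s_t)
values_next = np.array([0.25, 0.30, 0.0])   # V_phi(s_{t+1}); 0 past terminal
log_probs   = np.array([-0.9, -1.5, -0.4])  # log pi_theta(a_t|s_t)

# TD error doubles as the advantage estimate in A2C.
td_error = rewards + gamma * values_next - values

actor_loss  = -(log_probs * td_error).sum()   # L_actor
critic_loss = (td_error ** 2).sum()           # L_critic
print(actor_loss, critic_loss)
```

In practice the Actor and Critic parameters are updated jointly (often with an entropy bonus for exploration), but the per-trajectory loss terms are exactly these sums.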

8. Worked Example

Case Study: Translating "He is fishing at the bank."

Scenario: A text-only NMT system would most often translate "bank" in its financial-institution sense ("Bank" in German).

The Proposed System's Process:

  1. Input Processing:
    • Text Encoder: Processes "He is fishing at the bank." The word "bank" is highly ambiguous.
    • Image Encoder (ResNet): Processes the accompanying image, extracting features indicating a river, water, vegetation, and a person holding a rod.
  2. Multimodal Fusion: The fused representation strongly weights visual features associated with "river" over "financial building."
  3. Guided Generation (Actor): At the step where "bank" must be translated, the decoder's policy $\pi_\theta(a|s)$ is shaped by the visual context, so the probability distribution over the German vocabulary shifts toward "Ufer" (riverbank) rather than "Bank".
  4. Reward Computation (Critic): After generating the full sequence "Er angelt am Ufer," the system receives a reward (e.g., a BLEU score) by comparing it against human reference translations. Correct disambiguation yields a higher reward, reinforcing the policy's decision to attend to the image at that step.

This example illustrates how the system uses visual context to resolve lexical ambiguity, with the RL loop ensuring that such correct resolutions are directly rewarded and learned.
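The probability shift in the fusion and generation steps can be illustrated with a toy softmax; the vocabulary, logits, and additive visual bias below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

vocab = ["Bank", "Ufer", "angelt", "er"]
# Hypothetical decoder logits at the step that translates "bank".
text_only_logits = np.array([2.0, 1.2, -1.0, -1.0])
visual_bias      = np.array([-0.5, 1.5, 0.0, 0.0])  # image evidence: river scene

p_text  = softmax(text_only_logits)
p_fused = softmax(text_only_logits + visual_bias)

print(vocab[int(p_text.argmax())])   # "Bank" -- financial reading wins on text alone
print(vocab[int(p_fused.argmax())])  # "Ufer" -- riverbank wins after visual fusion
```

The RL reward then reinforces whichever shift produced the translation closer to the reference, closing the loop between visual grounding and the training signal.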

9. Future Work & Outlook

The framework presented here has implications far beyond image-guided translation:

The broader trend is a move from passive, likelihood-driven models toward active, goal-directed agents that can exploit multiple information channels to achieve well-specified objectives. This paper is an early but important step along that path.

10. References

  1. Jean, S., Cho, K., Memisevic, R., & Bengio, Y. (2015). On using very large target vocabulary for neural machine translation. ACL.
  2. Bengio, S., Vinyals, O., Jaitly, N., & Shazeer, N. (2015). Scheduled sampling for sequence prediction with recurrent neural networks. NeurIPS.
  3. Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2015). Show and tell: A neural image caption generator. CVPR.
  4. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., ... & Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. ICML.
  5. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. CVPR.
  6. Ranzato, M., Chopra, S., Auli, M., & Zaremba, W. (2016). Sequence level training with recurrent neural networks. ICLR.
  7. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
  8. Lu, J., Batra, D., Parikh, D., & Lee, S. (2019). ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. NeurIPS.
  9. Google Brain & FAIR. (2020). Challenges in Reinforcement Learning for Text Generation (Survey).
  10. Microsoft Research. (2021). Dense Reward Engineering for Language Generation.