DeepN-JPEG: A Deep Neural Network Favorable JPEG-based Image Compression Framework



Frequency partitioning.

As shown in Fig. 5, the "magnitude-based" method consistently achieves higher accuracy than the "position-based" method in the MF and HF bands as the quantization step increases. Moreover, our solution can deliver a higher compression rate than JPEG without reducing accuracy in either the MF or HF band, i.e., 40 vs. 60 in the HF band. In addition, we observe that DNN accuracy starts to degrade once Q(i, j) > 5 in the LF band, indicating that the statistically largest DCT coefficients are the most sensitive to quantization error; we therefore set Qmin = 5 as the lower bound of the quantization step to guarantee accuracy (see Fig. 5(a)). Similarly, from the critical points in Figs. 5(b) and (c) we can read off the quantization steps at points T1 and T2, and thereby determine the parameters k1, k2, a, and b. A sketch of the resulting table construction is given below.
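To make the parameterization concrete, the following is a minimal Python sketch of a band-wise quantization table built from these parameters. It rests on stated assumptions: the inverse relation between a coefficient's standard deviation and its step, and the pairing of (c, k3), (b, k2), and (a, k1) with the LF, MF, and HF bands, are our illustrative reading of the text, not the paper's exact equations.

import numpy as np

# Parameters reported for ImageNet (Section 5): a=255, b=80, c=240,
# T1=20, T2=60, k1=9.75, k2=1, k3=3; Qmin=5 is motivated above.
A, B, C = 255.0, 80.0, 240.0
T1, T2 = 20, 60              # zigzag indices splitting LF / MF / HF
K1, K2, K3 = 9.75, 1.0, 3.0
Q_MIN = 5.0                  # lower bound protecting large LF coefficients

def quant_step(zz_idx, sigma, k3=K3):
    # Larger per-frequency standard deviation -> smaller step (assumed),
    # since the statistically largest coefficients are most error-sensitive.
    eps = 1e-6               # avoid division by zero for dead frequencies
    if zz_idx < T1:          # LF band: uses c and k3 (assumed pairing)
        q = C / (k3 * sigma + eps)
    elif zz_idx < T2:        # MF band: uses b and k2 (assumed pairing)
        q = B / (K2 * sigma + eps)
    else:                    # HF band: uses a and k1 (assumed pairing)
        q = A / (K1 * sigma + eps)
    return float(np.clip(q, Q_MIN, 255.0))

def build_quant_table(sigmas, k3=K3):
    """sigmas: 64 per-frequency standard deviations in zigzag order."""
    return np.array([quant_step(i, s, k3) for i, s in enumerate(sigmas)])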



Tuning k3 in the LF band. Unlike the parameters of the MF and HF bands, optimizing k3 in the LF band is non-trivial, because it significantly affects both accuracy and compression rate. Since k3 cannot be solved directly from the lower bound Qmin and c, we study the relationship between compression rate and accuracy under different values of k3. As shown in Fig. 6, a smaller k3 can increase the compression rate while sacrificing only a little DNN accuracy. Based on this observation, we choose k3 = 3 to boost the compression rate while preserving the original accuracy. Such a sweep could be scripted as sketched below.
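A hedged sketch of the sweep follows. build_quant_table is the helper from the previous sketch, while encode_dataset and evaluate_top1 are hypothetical stand-ins for the reader's own JPEG pipeline and DNN validation loop; they are not functions from the paper or any library.

# Hypothetical k3 sweep: rebuild the quantization table for each candidate
# value, re-encode a validation subset, and log compression rate vs. accuracy.
# encode_dataset() and evaluate_top1() are placeholders, not defined here.
for k3 in (1.0, 2.0, 3.0, 4.0, 5.0):
    table = build_quant_table(sigmas, k3=k3)        # from the sketch above
    compressed_bytes = encode_dataset(val_images, table)
    top1 = evaluate_top1(model, val_images, table)
    print(f"k3={k3}: CR={original_bytes / compressed_bytes:.2f}x, "
          f"top-1={top1:.2%}")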





5 EVALUATION

Our experiments are conducted on the open-source deep learning framework Torch [26]. The "DeepN-JPEG" framework is implemented by substantially modifying the open-source JPEG library [27], and the large-scale ImageNet dataset [17] is adopted to evaluate compression rate and classification accuracy. Images retain their original dimensions in our evaluation, without resizing or other preprocessing. The optimized "DeepN-JPEG" parameters dedicated to ImageNet are: a = 255, b = 80, c = 240, T1 = 20, T2 = 60, k1 = 9.75, k2 = 1, k3 = 3. Four state-of-the-art DNN models are evaluated in our experiments: AlexNet [11], VGG [15], GoogLeNet [12], and ResNet [14].

5.1 Compression Rate and Accuracy

We first evaluate the compression rate and classification accuracy of our proposed DeepN-JPEG framework. Three baseline designs are implemented for comparison: the "original" dataset compressed by JPEG (QF = 100, CR = 1), the "RM-HF" compressed dataset, and the "SAME-Q" compressed dataset. Specifically, "RM-HF" extends JPEG by removing the top high-frequency components from the quantization table to further increase the compression rate, while "SAME-Q" denotes a more aggressive compression method that applies the same quantization step to all frequency components. An illustrative sketch of these two baselines follows.
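The sketch below is our illustrative reading of the two baselines, not code from the paper: JPEG_LUMA is the standard JPEG luminance table (Annex K of the JPEG specification), the helper names rm_hf_table and same_q_table are ours, frequency is approximated by i + j rather than exact zigzag order, and "removing" a component is modeled by forcing its step to the maximum.

import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG spec),
# used as the base that "RM-HF" modifies.
JPEG_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def rm_hf_table(base, top_k):
    """'RM-HF' baseline: force the top_k highest-frequency entries to the
    maximum step (255) so those coefficients quantize to zero. Frequency
    is approximated by i + j instead of exact zigzag order."""
    flat = base.astype(np.int32).reshape(-1).copy()
    freq_rank = np.argsort([i + j for i in range(8) for j in range(8)])
    flat[freq_rank[-top_k:]] = 255
    return flat.reshape(8, 8)

def same_q_table(step):
    """'SAME-Q' baseline: one identical step for all 64 frequencies."""
    return np.full((8, 8), step, dtype=np.int32)

rm_hf_3 = rm_hf_table(JPEG_LUMA, top_k=3)   # e.g., the "top-3" variant
same_q_4 = same_q_table(4)                  # e.g., a uniform step of 4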





Fig. 7 compares the compression rate and accuracy of all selected candidates on the ImageNet dataset with the AlexNet DNN. Compared with "original", "RM-HF" slightly improves the compression rate (∼1.1×–∼1.3×) by removing the top high-frequency components (top-3 to top-9), while "SAME-Q" achieves a better compression rate (∼1.5×–∼2×). In both schemes, however, the accuracy drops as the compression rate increases (w.r.t. "original"). In contrast, our "DeepN-JPEG" delivers the best compression rate (∼3.5×) while maintaining accuracy comparable to that of the original dataset, making it a promising solution for lowering the cost of data transfer and storage on modern devices running deep learning tasks.



Generality of DeepN-JPEG.

As shown in Fig. 8, we extend our evaluation to several state-of-the-art DNNs to study how "DeepN-JPEG" responds to different DNN architectures, including GoogLeNet, VGG-16, ResNet-34, and ResNet-50. Our proposed "DeepN-JPEG" always maintains the original accuracy (w.r.t. "Original") across all selected DNN models. Although JPEG can easily reach a compression rate similar to that of "DeepN-JPEG" by significantly lowering the JPEG QF value, e.g., QF ≈ 50, such aggressive lossy compression significantly degrades the classification performance of all selected DNN models. In contrast, "DeepN-JPEG" preserves both a high compression rate and high accuracy for all DNNs, and is therefore a general solution.

5.2 Power Consumption

On resource-constrained terminal devices, the power consumed by data loading can even exceed that of the DNN computation itself in deep learning [10].

Compressing the data can reduce the associated cost. Following the same measurement methodology as [10], Fig. 9 shows the power reduction results. Processing data compressed by our "DeepN-JPEG" consumes only ∼30% of the energy of the original dataset, with no reduction in accuracy. Compared with "RM-HF" and "SAME-Q" (the latter with the same quantization value, 4, across the quantization table), "DeepN-JPEG" achieves ∼2× and ∼3× power reductions, respectively, owing to its more efficient data compression.



6 CONCLUSION

The ever-growing volume of data transfer and storage places a heavy burden on the energy efficiency and performance of large-scale DNNs.

In this paper, we propose "DeepN-JPEG", a DNN-oriented image compression framework that eases data storage and transfer. Unlike JPEG compression, which was inspired by the human visual system, our solution effectively reduces quantization error through frequency-component analysis and a rectified quantization table, further improving the compression rate without accuracy degradation. Our experimental results show that "DeepN-JPEG" achieves a ∼3.5× compression rate improvement and consumes only 30% of the power of conventional JPEG without any classification accuracy loss, making it a promising solution for data storage and communication in deep learning.

References

[1] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.

[2] C. Szegedy, "An overview of deep learning," AITP 2016, 2016.

[3] D. Silver and D. Hassabis, "AlphaGo: Mastering the ancient game of Go with machine learning," Research Blog, 2016.

[4] https://research.fb.com/category/facebook-ai-research-fair/.

[5] https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html.

[6] https://www.microsoft.com/en-us/research/research-area/artificial-intelligence/.

[7] S. Soro and W. Heinzelman, "A survey of visual sensor networks," Advances in Multimedia, vol. 2009, 2009.

[8] C. Liu, Q. Yang, B. Yan, J. Yang, X. Du, W. Zhu, H. Jiang, Q. Wu, M. Barnell, and H. Li, "A memristor crossbar based computing engine optimized for high speed and accuracy," in VLSI (ISVLSI), 2016 IEEE Computer Society Annual Symposium on. IEEE, 2016, pp. 110–115.

[9] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, "EIE: Efficient inference engine on compressed deep neural network," in Proceedings of the 43rd International Symposium on Computer Architecture. IEEE Press, 2016, pp. 243–254.

[10] Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang, "Neurosurgeon: Collaborative intelligence between the cloud and mobile edge," in Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems. ACM, 2017, pp. 615–629.

[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.

[12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.

[13] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504–507, 2006.

[14] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.

[15] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[16] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.

[17] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in CVPR09, 2009.

[18] G. K. Wallace, "The JPEG still picture compression standard," IEEE Transactions on Consumer Electronics, vol. 38, no. 1, pp. xviii–xxxiv, 1992.

[19] V. Ratnakar and M. Livny, "An efficient algorithm for optimizing DCT quantization," IEEE Transactions on Image Processing, vol. 9, no. 2, pp. 267–270, 2000.

[20] X. Zhang, S. Wang, K. Gu, W. Lin, S. Ma, and W. Gao, "Just-noticeable difference-based perceptual optimization for JPEG compression," IEEE Signal Processing Letters, vol. 24, no. 1, pp. 96–100, 2017.

[21] J. Chao, H. Chen, and E. Steinbach, "On the design of a novel JPEG quantization table for improved feature detection performance," in Image Processing (ICIP), 2013 20th IEEE International Conference on. IEEE, 2013, pp. 1675–1679.

[22] L.-Y. Duan, X. Liu, J. Chen, T. Huang, and W. Gao, "Optimizing JPEG quantization table for low bit rate mobile visual search," in Visual Communications and Image Processing (VCIP), 2012 IEEE. IEEE, 2012, pp. 1–6.

[23] M. Hopkins, M. Mitzenmacher, and S. Wagner-Carena, "Simulated annealing for JPEG quantization," arXiv preprint arXiv:1709.00649, 2017.

[24] R. Reininger and J. Gibson, "Distributions of the two-dimensional DCT coefficients for images," IEEE Transactions on Communications, vol. 31, no. 6, pp. 835–839, Jun 1983.

[25] B. Kaur, A. Kaur, and J. Singh, "Steganographic approach for hiding image in DCT domain," International Journal of Advances in Engineering & Technology, vol. 1, no. 3, p. 72, 2011.

[26] R. Collobert, K. Kavukcuoglu, and C. Farabet, "Torch7: A Matlab-like environment for machine learning," in BigLearn, NIPS Workshop, 2011.

[27] IJG. Independent JPEG Group. [Online]. Available: http://www.ijg.org/