[1] Otsu, N. (1979) A Threshold Selection Method from Gray-Level Histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9, 62-66. https://doi.org/10.1109/tsmc.1979.4310076
[2] Zhang, Y. (2006) An Overview of Image and Video Segmentation in the Last 40 Years. In: Zhang, Y.J., Ed., Advances in Image and Video Segmentation, IGI Global, 1-16. https://doi.org/10.4018/978-1-59140-753-9.ch001
[3] Khan, J.F., Bhuiyan, S.M.A. and Adhami, R.R. (2011) Image Segmentation and Shape Analysis for Road-Sign Detection. IEEE Transactions on Intelligent Transportation Systems, 12, 83-96. https://doi.org/10.1109/tits.2010.2073466
[4] Lecun, Y., Bottou, L., Bengio, Y. and Haffner, P. (1998) Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86, 2278-2324. https://doi.org/10.1109/5.726791
[5] Fu, Z., Li, J. and Hua, Z. (2022) DEAU-Net: Attention Networks Based on Dual Encoder for Medical Image Segmentation. Computers in Biology and Medicine, 150, Article ID: 106197. https://doi.org/10.1016/j.compbiomed.2022.106197
[6] Si, M.M., Chen, W., Hu, C.Y., et al. (2021) Segmentation of Color Fundus Vessel Images Fusing ResNet50 and U-Net. Electronic Science and Technology, 34, 19-24. (In Chinese)
[7] Ronneberger, O., Fischer, P. and Brox, T. (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab, N., Hornegger, J., Wells, W. and Frangi, A., Eds., Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Springer, 234-241. https://doi.org/10.1007/978-3-319-24574-4_28
[8] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., et al. (2023) Segment Anything. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, 1-6 October 2023, 3992-4003. https://doi.org/10.1109/iccv51070.2023.00371
[9] Ravi, N., Gabeur, V., Hu, Y.T., Hu, R., Ryali, C., Ma, T., Khedr, H., Rädle, R., Rolland, C., Gustafson, L., et al. (2024) SAM 2: Segment Anything in Images and Videos. arXiv: 2408.00714.
[10] Chen, T., Zhu, L., Ding, C., Cao, R., Wang, Y., Zhang, S., et al. (2023) SAM-Adapter: Adapting Segment Anything in Underperformed Scenes. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Paris, 2-6 October 2023, 3359-3367. https://doi.org/10.1109/iccvw60793.2023.00361
[11] Huang, D., Xiong, X., Ma, J., Li, J., Jie, Z., Ma, L., et al. (2024) AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, 16-22 June 2024, 3205-3215. https://doi.org/10.1109/cvpr52733.2024.00309
[12] Liu, Y., Zhu, M., Li, H., Chen, H., Wang, X. and Shen, C. (2024) Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching. International Conference on Learning Representations (ICLR).
[13] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J. and Houlsby, N. (2021) An Image Is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. International Conference on Learning Representations (ICLR).
[14] Ravi, N., Gabeur, V., Hu, Y.T., et al. (2024) SAM 2: Segment Anything in Images and Videos. arXiv: 2408.00714.
[15] Woo, S., Park, J., Lee, J. and Kweon, I.S. (2018) CBAM: Convolutional Block Attention Module. In: Ferrari, V., Hebert, M., Sminchisescu, C. and Weiss, Y., Eds., Computer Vision—ECCV 2018, Springer, 3-19. https://doi.org/10.1007/978-3-030-01234-2_1
[16] Vasques, B.I., Pereira, J.M., Santos, A. and Neves, L.A. (2017) A Public Dataset for Thyroid Ultrasound Image Analysis. Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, 17-20 September 2017.
[17] Gong, H., Chen, G., Wang, R., Xie, X., Mao, M., Yu, Y., et al. (2021) Multi-Task Learning for Thyroid Nodule Segmentation with Thyroid Region Prior. 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, 13-16 April 2021, 257-261. https://doi.org/10.1109/isbi48211.2021.9434087
[18] Jha, D., Smedsrud, P.H., Riegler, M.A., Halvorsen, P., de Lange, T., Johansen, D., et al. (2019) Kvasir-SEG: A Segmented Polyp Dataset. In: Ro, Y., et al., Eds., MultiMedia Modeling, MMM 2020, Springer, 451-462. https://doi.org/10.1007/978-3-030-37734-2_37
[19] Bernal, J., Sánchez, F.J., Fernández-Esparrach, G., Gil, D., Rodríguez, C. and Vilariño, F. (2015) WM-DOVA Maps for Accurate Polyp Highlighting in Colonoscopy: Validation vs. Saliency Maps from Physicians. Computerized Medical Imaging and Graphics, 43, 99-111. https://doi.org/10.1016/j.compmedimag.2015.02.007
[20] Tajbakhsh, N., Gurudu, S.R. and Liang, J. (2016) Automated Polyp Detection in Colonoscopy Videos Using Shape and Context Information. IEEE Transactions on Medical Imaging, 35, 630-644. https://doi.org/10.1109/tmi.2015.2487997
[21] Silva, J., Histace, A., Romain, O., Dray, X. and Granado, B. (2013) Toward Embedded Detection of Polyps in WCE Images for Early Diagnosis of Colorectal Cancer. International Journal of Computer Assisted Radiology and Surgery, 9, 283-293. https://doi.org/10.1007/s11548-013-0926-3
[22] Loshchilov, I. and Hutter, F. (2019) Decoupled Weight Decay Regularization. International Conference on Learning Representations (ICLR), New Orleans, 6-9 May 2019.
[23] Milletari, F., Navab, N. and Ahmadi, S. (2016) V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. 2016 Fourth International Conference on 3D Vision (3DV), Stanford, 25-28 October 2016, 565-571. https://doi.org/10.1109/3dv.2016.79
[24] Dice, L.R. (1945) Measures of the Amount of Ecologic Association between Species. Ecology, 26, 297-302. https://doi.org/10.2307/1932409
[25] Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J. and Zisserman, A. (2009) The Pascal Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88, 303-338. https://doi.org/10.1007/s11263-009-0275-4
[26] Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K. and Yuille, A.L. (2017) Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv: 1706.05587.
[27] Chen, L., Zhu, Y., Papandreou, G., Schroff, F. and Adam, H. (2018) Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In: Ferrari, V., Hebert, M., Sminchisescu, C. and Weiss, Y., Eds., Computer Vision—ECCV 2018, Springer, 833-851. https://doi.org/10.1007/978-3-030-01234-2_49
[28] Zhang, Y., Zhang, Q., Wang, X. and Zhang, Y. (2022) Devil Is in Channels: Contrastive Single Domain Generalization for Medical Image Segmentation. arXiv: 2209.07211.
[29] Trinh, Q.H. (2023) Meta-Polyp: A Baseline for Efficient Polyp Segmentation. arXiv: 2305.07848.
[30] Huang, C.H., Wu, H.Y. and Lin, Y.L. (2021) HarDNet-MSEG: A Simple Encoder-Decoder Polyp Segmentation Neural Network that Achieves over 0.9 Mean Dice and 86 FPS. arXiv: 2101.07172.
[31] Wei, J., Hu, Y., Zhang, R., Li, Z., Zhou, S.K. and Cui, S. (2021) Shallow Attention Network for Polyp Segmentation. In: de Bruijne, M., et al., Eds., Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, Springer, 699-708. https://doi.org/10.1007/978-3-030-87193-2_66
[32] Zhao, X., Zhang, L. and Lu, H. (2021) Automatic Polyp Segmentation via Multi-Scale Subtraction Network. In: de Bruijne, M., et al., Eds., Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, Springer, 120-130. https://doi.org/10.1007/978-3-030-87193-2_12
[33] Wang, J., Huang, Q., Tang, F., Meng, J., Su, J. and Song, S. (2022) Stepwise Feature Fusion: Local Guides Global. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S. and Li, S., Eds., Medical Image Computing and Computer Assisted Intervention—MICCAI 2022, Springer, 110-120. https://doi.org/10.1007/978-3-031-16437-8_11
[34] Zhou, T., Zhou, Y., He, K., Gong, C., Yang, J., Fu, H., et al. (2023) Cross-Level Feature Aggregation Network for Polyp Segmentation. Pattern Recognition, 140, Article ID: 109555. https://doi.org/10.1016/j.patcog.2023.109555
[35] Zhang, J., Ma, K., Kapse, S., Saltz, J., Vakalopoulou, M., Prasanna, P., et al. (2023) SAM-Path: A Segment Anything Model for Semantic Segmentation in Digital Pathology. In: Celebi, M.E., et al., Eds., Medical Image Computing and Computer Assisted Intervention—MICCAI 2023 Workshops, Springer, 161-170. https://doi.org/10.1007/978-3-031-47401-9_16