This work was accepted for an ORAL presentation at the MIDL 2018 conference and received the CIFAR Student Travel Grant award.
The code has been made publicly available at https://github.com/lalonderodney/SegCaps. Live code demo: drive.google.com/drive/folders/1MhebBrDsh3N5HSntj2Zl5edx56_IkXkN?usp=sharing.
In this work, we focus on extending the recently introduced "capsule networks" to the task of object segmentation in larger images. The original paper by Rodney LaLonde and Ulas Bagci, "Capsules for Object Segmentation," can be found at https://arxiv.org/abs/1804.04241.
Video of Oral Presentation
This work appeared as an oral presentation at the recent International Conference on Medical Imaging with Deep Learning (MIDL) in Amsterdam, 4-6 July 2018.
Brief Description of Work
Convolutional Neural Networks (CNNs) have shown remarkable results over the last several years for a wide range of computer vision tasks. A new architecture recently introduced by Sabour et al., referred to as capsule networks with dynamic routing, has shown great initial results for digit recognition and small-image classification. Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature.
We extend the idea of convolutional capsules with locally-connected routing and propose the concept of deconvolutional capsules. Further, we extend the masked reconstruction technique to reconstruct only the positive input class. The proposed convolutional-deconvolutional capsule network, called SegCaps, shows strong results for the task of object segmentation with a substantial decrease in parameter space.
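To make the locally-constrained routing idea concrete, here is a minimal NumPy sketch of routing-by-agreement restricted to one spatial position's kernel window. This is an illustrative assumption of the mechanism, not the Keras implementation in the repository; all shapes, names, and the kernel/capsule-type counts in the usage example are hypothetical.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-7):
    # Squash non-linearity: shrinks vector length into [0, 1) while preserving direction.
    sq_norm = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * v / np.sqrt(sq_norm + eps)

def local_dynamic_routing(u_hat, num_iters=3):
    """Dynamic routing for the children inside one spatially-local kernel window.

    u_hat: prediction vectors, shape (num_children, num_parents, dim_parent),
           where num_children = kernel_h * kernel_w * child_capsule_types.
    Returns parent capsule outputs, shape (num_parents, dim_parent).
    """
    b = np.zeros(u_hat.shape[:2])                       # routing logits, one per child-parent pair
    v = None
    for _ in range(num_iters):
        e = np.exp(b - b.max(axis=1, keepdims=True))    # numerically stable softmax
        c = e / e.sum(axis=1, keepdims=True)            # coupling coefficients over parents
        s = (c[..., None] * u_hat).sum(axis=0)          # weighted sum over children in window
        v = squash(s)                                   # parent outputs
        b = b + (u_hat * v[None]).sum(-1)               # agreement update
    return v

# Hypothetical sizes: a 3x3 kernel over 4 child capsule types,
# routing to 8 parent capsules of dimension 16.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(3 * 3 * 4, 8, 16))
v = local_dynamic_routing(u_hat)
```

Because only the children inside the kernel window vote for each parent, the routing cost grows with the kernel size rather than the full spatial grid, which is what allows capsules to scale to large images.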
As an example application, we applied the proposed SegCaps to segment pathological lungs from low-dose CT scans and compared its accuracy and efficiency with other U-Net-based architectures. SegCaps is able to handle large image sizes (512 x 512) as opposed to baseline capsule networks (typically less than 32 x 32). The proposed SegCaps reduced the number of parameters of the U-Net architecture by 95.4% while still providing better segmentation accuracy.
Feel free to leave any comments or questions about this project by signing in at the bottom of the page.
Hi Rodney, We are working on a project based on your article, which is very interesting. We read your paper on arXiv and went over your TensorFlow implementation, and we have a question regarding the ConvCapsuleLayer implementation. As described in the article, there is a different transformation matrix M for each pair of capsules in the child layer and parent layer (M_ij); however, in the code implementation it seems that there is a matrix W for each parent capsule, but the same matrix is used for all the child capsules (W_j). We would be happy if you could help us understand this gap. Thanks in advance, Shira and Tzvi (Tel Aviv University, Israel)
Hello:
I read your paper on arXiv and downloaded your project code from GitHub. However, I am confused about your routing algorithm. I do not understand how the child capsules route to the parent capsules within a defined spatially-local kernel.
Looking forward to your help.
Best wishes,
Evan
Shantou University, China
Hello there,
I am trying to implement a 3D version of your capsule networks. Any help would be deeply appreciated.
Thanks
-Chandan
Hello Jenny,
Thank you for leaving your comment. Upon review, it would appear there was an error with the key in Figure 2. As you mention, the item "skip connection" is denoted by a black arrow. However, this is supposed to be a bright green arrow. The green arrows showing the skip connections are visible in the figure, although I am not sure why the key is not green as it should be. Hopefully this clears up the confusion. I'll see if I can find the original file for that figure and update the key.
Best regards,
Rodney
Hello there! I'm currently reading your paper on arXiv, and noticed that there is a key for skip connections in Figure 2 (denoted by a black arrow), but I do not see this depicted anywhere in the architecture diagram. Based on your repository on GitHub, it does seem as though skip connections are being used (by the way, I commend you on open-sourcing your code - thank you very much for contributing to reproducible science!). In any case, I thought this would be something worth updating. Best, Jenny (University of California, San Diego)