This paper presents a deep learning framework for underwater marine species identification and classification using an integrated feature-fusion architecture that combines multiple CNN backbones. The proposed system enhances underwater image recognition by fusing features from MobileNetV2, EfficientNetB0, and InceptionV3. This complementary feature extraction captures both fine-grained textures and high-level spatial representations, addressing visual challenges specific to underwater imagery such as turbidity, low illumination, and color distortion. The model is trained and tested on a balanced dataset of marine species, with image resizing, normalization, and augmentation applied to improve generalization. Classification performance is evaluated using accuracy, precision, recall, and F1-score. The results show significant improvements over the individual CNN models, demonstrating the benefit of feature fusion in complex underwater scenarios. The framework is implemented in PyTorch on Google Colab to enable efficient computation and easy scalability. Overall, the system contributes to ocean biodiversity research and ecological conservation by making real-time monitoring of marine life possible.