Machine Learning in Remote Sensing Image Analysis
This book starts from image enhancement and restoration and from semantic segmentation algorithms for natural images, building a thorough understanding of natural imagery. It then develops machine learning and deep learning models for the semantic segmentation of remote sensing images. Finally, these algorithms are applied to downstream tasks, such as classification and multi-source information fusion in precision agriculture. The book is organized into three parts, each presented chapter by chapter.

Part I introduces and studies machine learning algorithms, covering machine learning fundamentals, sparse optimization, and deep learning modeling. Drawing on Bayesian probabilistic modeling, nonlinear filtering, sparse representation and optimization, non-parametric Bayesian methods, and fusion adversarial networks, it addresses image enhancement, image denoising, image reconstruction, multi-scale fusion, and the construction of deep learning networks.

Part II focuses on deep learning networks for remote sensing image analysis, including the originally proposed HRCNet, EfficientFormer, CCTNet, and Hyper-LGNet architectures, which tackle multi-scale fusion, lightweight design, boundary ambiguity, fine-grained classification, hyperspectral feature extraction, and hyperparameter optimization.

Part III presents selected application scenarios, such as precision agriculture. From the perspectives of feature extraction, state representation, image denoising, and multi-source information fusion, it addresses remote sensing image classification and segmentation as well as crop disease monitoring.
About the Authors

張?zhí)煜: research interests include remote sensing image analysis, deep learning, machine vision, and multi-source information fusion. Has served as a workshop chair of the international conference ICRAIC and as a technical committee member of the IEEE Vehicular Technology Conference (VTC), and reached the finals of the National Postdoctoral Innovation and Entrepreneurship Competition. Has led or participated in six projects, including National Natural Science Foundation of China (NSFC) grants, the Postdoctoral International Exchange Program, the USTB Young Faculty International Exchange and Development Program, and projects commissioned by central state-owned enterprises, and has published 20 SCI-indexed papers.

莊培顯: research interests include remote sensing image fusion, underwater image processing, sparse representation, and deep learning. Has served as a session chair of the IEEE ICSIP international conference and as a special-issue editorial board member of the Journal of Electronics & Information Technology (《電子與信息學(xué)報》), and received the Fujian Province Excellent Doctoral Dissertation Award. Has led two NSFC projects and one China Postdoctoral Science Foundation general project, participated in one NSFC Basic Science Center project and one Science and Technology Innovation 2030 "New Generation Artificial Intelligence" major project, and has published more than 30 SCI-indexed papers.

王麗君: research interests include deep learning, computer vision, image processing, and the control, optimization, and decision-making of complex systems. Council member of the Beijing Association of Automation and editorial board member of the journal《智能機(jī)器人》. In recent years, has undertaken more than ten research projects, including National Key R&D Program key projects, NSFC grants, and major enterprise engineering projects, published more than 30 papers in leading domestic and international journals and conferences, and holds eight granted or pending invention patents.

李江昀: research interests include deep-learning-based image and video semantic segmentation (remote sensing, medical, and industrial applications), industrial intelligence, key technologies and applications of visual perception, natural language understanding (NLP), and knowledge graphs. In recent years, has undertaken more than 20 provincial- and ministerial-level projects and industry-commissioned projects; published more than 40 papers in mainstream SCI/EI journals such as IEEE Transactions on Image Processing, IEEE Transactions on Neural Networks and Learning Systems, IEEE Sensors Journal, Remote Sensing, and Physics in Medicine and Biology; and holds eight granted national invention patents, three of which have been successfully commercialized for a total of 3 million RMB.
Contents

1 Blind Image Deblurring with Joint Extreme Channels and L0-Regularized Intensity and Gradient Priors
  1.1 Introduction 1
  1.2 Proposed Model and Optimization Algorithm 2
  1.3 Experimental Results 5
  1.4 Conclusion 6
2 Non-Blind Deconvolution with l1-Norm of High-Frequency Fidelity
  2.1 Introduction 7
  2.2 Proposed Method 9
    2.2.1 Overall Objective Function 9
    2.2.2 L1-Norm Fidelity of High-Frequency Images 9
    2.2.3 Numerical Solution 12
  2.3 Experiment Validation 13
    2.3.1 Extended Levin and Google Image Dataset Comparison 15
    2.3.2 Motion Blur Comparison 17
    2.3.3 Performance Comparison with Noise Effect 18
    2.3.4 High-Frequency Importance Validation 20
  2.4 Conclusion and Discussion 21
3 A Novel Framework Method for Non-Blind Deconvolution Using Subspace Images Priors
  3.1 Introduction 23
  3.2 Previous Work of Priors Regularization 25
  3.3 The Motivation and Design of the Proposed Framework 26
    3.3.1 Motivation 26
    3.3.2 The Proposed Framework 29
  3.4 Experiment Validation 32
    3.4.1 Extended Levin and Google Image Dataset Comparison 34
    3.4.2 General Blurs Comparison 36
    3.4.3 Deblurring Performance with Noise 37
    3.4.4 Comparison of Different Subspaces Decomposition 38
    3.4.5 Comparison of Different Integration Methods 39
  3.5 Conclusion and Discussion 40
4 MRI Reconstruction with an Edge-Preserving Filtering Prior
  4.1 Introduction 41
  4.2 An MRI Reconstruction Algorithm with an Edge-Preserving Filtering Prior 43
    4.2.1 Model Construction 43
    4.2.2 Numerical Solution 46
  4.3 Experimental Results 48
    4.3.1 Experiments on Real-Valued MRI 50
    4.3.2 Experiments on Complex-Valued MRI 50
    4.3.3 Performance with Noise 52
    4.3.4 Edge-Preserving Filtering Validation 53
    4.3.5 Parameter Evaluation 54
    4.3.6 Computational Time 56
    4.3.7 Extensive Experiments on CT Reconstruction 56
  4.4 Conclusion 57
5 Compressed Sensing MRI with Joint Image-Level and Patch-Level Priors
  5.1 Introduction 59
  5.2 MRI Reconstruction with Joint Image-Level and Patch-Level Priors 60
    5.2.1 Model Construction 60
    5.2.2 Optimization Algorithm 61
  5.3 Experimental Results 64
  5.4 Conclusion 65
6 Mixed Noise Removal Based on a Novel Non-parametric Bayesian Sparse Outlier Model
  6.1 Introduction 67
  6.2 Novel Model Construction 69
  6.3 Model Inference 71
  6.4 Experimental Validation 73
  6.5 Conclusion and Discussion 77
7 DewaterNet: A Fusion Adversarial Real Underwater Image Enhancement Network
  7.1 Introduction 79
    7.1.1 Related Work 80
    7.1.2 Non-model-based methods 80
    7.1.3 Model-based methods 81
    7.1.4 Deep learning-based methods 81
    7.1.5 Discussion 82
  7.2 Methodology 83
    7.2.1 Network architecture 83
    7.2.2 GAN objective function 84
  7.3 Experiments 85
    7.3.1 Setup 85
    7.3.2 Subjective assessment 87
    7.3.3 Objective assessment 90
    7.3.4 Ablation study 92
  7.4 Conclusion 94
8 Underwater Image Enhancement Using a Multi-Scale Dense Generative Adversarial Network
  8.1 Introduction 95
  8.2 Related Work 96
  8.3 Methodology 98
    8.3.1 Generator Network 98
    8.3.2 Discriminator Network 100
    8.3.3 GAN objective function 101
  8.4 Experiments 101
    8.4.1 Setup 102
    8.4.2 Real-world underwater enhancement 103
    8.4.3 Synthetic underwater enhancement 104
    8.4.4 Ablation study and application tests 105
  8.5 Conclusion 106
9 Compressed Sensing MRI via a Multi-scale Dilated Residual Convolution Network
  9.1 Introduction 109
  9.2 Related Work 111
    9.2.1 Residual learning 111
    9.2.2 Dilated convolution 111
    9.2.3 Concatenation 112
  9.3 Proposed Method 112
    9.3.1 Problem Formulation 112
    9.3.2 Proposed Block 112
    9.3.3 Network Architecture 115
  9.4 Experiment Results 115
    9.4.1 Experiments on Real-valued MRI with Different Masks 118
    9.4.2 Experiments on Complex-valued MRI with Different Masks 118
    9.4.3 Ablation Study 119
    9.4.4 Experiments in the Noisy Setting 121
    9.4.5 Discussions on Dilated Convolutions, the Number of Blocks and Parameters 122
    9.4.6 Experiments on Super-resolution 123
  9.5 Conclusion and Prospect 123
10 Attention Guided Global Enhancement and Local Refinement Network for Semantic Segmentation
  10.1 Introduction 125
  10.2 Related Work 127
    10.2.1 Encoder-Decoder Models 127
    10.2.2 Global Context 127
    10.2.3 Local Context 128
    10.2.4 Relation Modeling and Attention Mechanism 128
  10.3 Proposed Approach 129
    10.3.1 Global Enhancement for Decoder Features 129
    10.3.2 Local Refinement for Encoder Features 131
    10.3.3 Context Fusion Block 133
    10.3.4 Overall Architecture 133
  10.4 Experiments and Results 134
    10.4.1 Experimental Settings 134
    10.4.2 Comparison with State-of-the-Arts 135
    10.4.3 Ablation Studies 136
    10.4.4 AGLN-Lite 141
    10.4.5 Understanding AGLN 142
  10.5 Conclusion 143
11 Land Cover Classification Method by SVM and Sentinel-2 Satellite Imagery
  11.1 Materials and problem 148
    11.1.1 Data sources 148
    11.1.2 Problems formulation 148
  11.2 Methodology 149
    11.2.1 Overall procedure 149
    11.2.2 Pre-process procedure 150
    11.2.3 Ground truth labelling 150
    11.2.4 Feature selection 151
    11.2.5 Classifier selection: SVMs 153
  11.3 Classification results 155
    11.3.1 Index-based approach 155
    11.3.2 Index-related bands approach 155
    11.3.3 MI selected bands approach 156
    11.3.4 Full bands approach 156
    11.3.5 Further discussions 156
  11.4 Conclusions 157
12 Optimized Random Forest: A Hyperparameter Tuning Method in Machine Learning Algorithms
  12.1 Introduction 165
  12.2 Related Work 166
    12.2.1 Machine learning classifier 167
    12.2.2 Deep learning classifier 167
  12.3 Materials 167
    12.3.1 Sentinel-2 satellite imagery 167
    12.3.2 Study area 168
  12.4 Methodology 169
    12.4.1 Problems formulation 169
    12.4.2 Land cover classification framework 170
    12.4.3 Classification performance evaluation 172
  12.5 Results 174
    12.5.1 RGB band features 174
    12.5.2 Full multispectral band features 176
  12.6 Discussion 179
  12.7 Conclusions and Future Work 180
13 Dense Semantic Labeling with Atrous Spatial Pyramid Pooling and Decoder for High-Resolution Remote Sensing Imagery
  13.1 Introduction 187
  13.2 Methods 190
    13.2.1 Encoder with ResNet and Atrous Spatial Pyramid Pooling 190
    13.2.2 Decoder and the Multi-scale Loss Function 191
    13.2.3 Dense Conditional Random Fields Based on Superpixel 194
  13.3 Results 194
    13.3.1 Datasets 194
    13.3.2 Preprocessing the Datasets 195
    13.3.3 Training Protocol and Metrics 196
    13.3.4 Experimental Results 196
  13.4 Evaluation and Discussion 199
    13.4.1 The Importance of Multi-scale Loss Function 199
    13.4.2 Comparison to DeepLab v3+ and Other State-of-the-Art Networks 200
    13.4.3 The Influence of Superpixel-based DenseCRF 202
  13.5 Conclusions 204
14 HRCNet: High-Resolution Context Extraction Network for Semantic Segmentation of Remote Sensing Images
  14.1 Introduction 205
  14.2 Related Work 207
    14.2.1 Remote Sensing Applications 207
    14.2.2 Model Design 207
  14.3 Methods 209
    14.3.1 The Basic HRNet 209
    14.3.2 Framework of the Proposed HRCNet 210
    14.3.3 Light-Weight High-Resolution Network (Light HRNet) 210
    14.3.4 Feature Enhancement Feature Pyramid (FEFP) Module 213
    14.3.5 Multi-level Loss Function 213
  14.4 Experiment 216
    14.4.1 Datasets 216
    14.4.2 Experiment Settings and Evaluation Metrics 217
    14.4.3 Training Data and Testing Data Preparation 218
    14.4.4 Experimental Results 219
  14.5 Discussion 226
    14.5.1 Ablation Experiments 226
    14.5.2 Improvements and Limitations 228
  14.6 Conclusions 228
15 Efficient Transformer for Remote Sensing Image Segmentation
  15.1 Introduction 231
  15.2 Related Work 233
  15.3 Methods 234
    15.3.1 Investigation of Basic Swin Transformer Backbone and UperHead 235
    15.3.2 Efficient architecture design 237
    15.3.3 Edge processing 239
  15.4 Experimental Results 242
    15.4.1 Datasets and experimental settings 242
    15.4.2 Study for Swin transformer 243
    15.4.3 Efficient transformer backbone and mlphead 247
    15.4.4 Edge processing methods 248
    15.4.5 Comparison to SOTA methods 249
  15.5 Discussion 252
  15.6 Conclusion 254
16 Hyper-LGNet: Coupling Local and Global Feature for Hyperspectral Image Classification
  16.1 Introduction 255
  16.2 Methodology 258
    16.2.1 Overview of conventional CNN and Transformer network 258
    16.2.2 Hyper-LGNet network architecture 259
    16.2.3 Experimental Settings 262
  16.3 Experiments and results 263
    16.3.1 Dataset introduction and division 263
    16.3.2 Classification results by the proposed method on three mainstream datasets 266
    16.3.3 Ablation studies 271
  16.4 Conclusions and future work 274
17 Hyper-ES2T: Efficient Spatial-Spectral Transformer for the Classification of Hyperspectral Remote Sensing Images
  17.1 Introduction 275
  17.2 Methodology 279
    17.2.1 A Brief Review of Transformer 279
    17.2.2 Efficient Spatial-Spectral Transformer Design 280
    17.2.3 Aggregated Feature Enhancement Module 283
    17.2.4 Discussion 284
  17.3 Experimental Results 285
    17.3.1 Experimental Datasets 285
    17.3.2 Experimental Settings and Evaluation Metrics 287
    17.3.3 Comparison with Previous SOTA Methods 287
    17.3.4 Ablation Study for Network Architecture 294
  17.4 Conclusion and Future Works 298
18 CCTNet: Coupling CNN and Transformer Networks for Crop Segmentation of Remote Sensing Images
  18.1 Introduction 299
  18.2 Related Work 301
  18.3 Methods 303
    18.3.1 The CNN-Based ResNet 303
    18.3.2 Basic CSwin Transformer 304
    18.3.3 Framework of the Proposed CCTNet 305
    18.3.4 Two Designs for CNN and Transformer Fusion Module 306
    18.3.5 Loss Functions Design 308
  18.4 Experimental Results 309
    18.4.1 Dataset and Experimental Settings 309
    18.4.2 Methods Comparison on Barley Remote Sensing Dataset 312
    18.4.3 Study for the CNN and Transformer Fusion Modules 313
    18.4.4 Ablation Experiments of the Auxiliary Loss Function 315
    18.4.5 Results of Different CNN and Transformer Model Sizes 315
    18.4.6 Study for the Improvements of Each Category 316
  18.5 Discussion 317
  18.6 Conclusion 318
19 Ir-UNet: Irregular Segmentation U-Shape Network for Wheat Yellow Rust Detection by UAV Multispectral Imagery
  19.1 Introduction 321
  19.2 Field experiment 323
    19.2.1 Experiment design 323
    19.2.2 UAV multispectral imaging system 324
    19.2.3 Data pre-processing and labelling 326
  19.3 Methodology 327
    19.3.1 Basic UNet network 328
    19.3.2 Proposed Ir-UNet network 328
    19.3.3 Performance metrics 331
    19.3.4 Ir-UNet algorithm settings 331
  19.4 Experimental results 332
    19.4.1 Data augmentation results 332
    19.4.2 Ablation studies 333
    19.4.3 Ir-UNet with various inputs 334
    19.4.4 Optimized band weight results 336
  19.5 Discussions 338
  19.6 Conclusions and future work 341
20 Efficient DF-UNet: Dual-Flow Architecture for Wheat Yellow Rust Severity Detection by UAV Multispectral Imagery
  20.1 Introduction 343
  20.2 Experiment design and data collection 345
    20.2.1 Experiment field design 345
    20.2.2 UAV data collection and preprocessing 345
    20.2.3 Data labelling 346
    20.2.4 Field experiment discussion 347
  20.3 Methodology 348
    20.3.1 Framework of the lightweight DF-UNet 348
    20.3.2 Design of the Sparse Channel Attention Module 350
    20.3.3 Design of dual flow branches 351
    20.3.4 Selection of loss functions 352
    20.3.5 Efficient DF-UNet experiment settings 353
    20.3.6 Performance evaluation metrics 354
  20.4 Experimental results 354
    20.4.1 Exploration for lightweight architecture 354
    20.4.2 Comparison to state-of-the-art methods 355
    20.4.3 Classification results for rust severity 357
    20.4.4 Exploration of Sliding Window 363
  20.5 Discussion 365
  20.6 Conclusion and future work 365
21 Canopy Cover Collection from UAV Multi-spectral Imagery by Machine Learning Algorithms
  21.1 Introduction 367
  21.2 Materials 368
    21.2.1 Experiment fields 368
    21.2.2 Multi-spectral aerial image 369
    21.2.3 Pix4D mapper for image preprocessing 371
    21.2.4 Data collection discussions 371
  21.3 Methodology 372
    21.3.1 Framework 372
    21.3.2 Data labelling and spectral analysis 373
    21.3.3 Image classification 373
  21.4 Performance evaluation and canopy cover calculation 375
    21.4.1 Classification performance evaluation 375
    21.4.2 Canopy cover calculation 377
  21.5 Conclusions 378
22 Bayesian Calibration of the AquaCrop Model
  22.1 Introduction 379
  22.2 Methodology 380
    22.2.1 Bayesian calibration framework 380
    22.2.2 AquaCrop-OS model 380
    22.2.3 Bayesian calibration method 382
    22.2.4 Model calibration implementation 385
  22.3 Systematic validation 386
    22.3.1 Monte Carlo simulation verification using biomass and CC measurements 386
    22.3.2 Experimental evaluation 387
  22.4 Results 387
    22.4.1 Results of MC simulation 388
    22.4.2 Results of experimental validation 390
    22.4.3 Regression analysis 393
  22.5 Conclusions 395
23 State Estimation for the AquaCrop Model Using Sensitivity Informed Particle Filter
  23.1 Introduction 397
  23.2 Methodology 398
    23.2.1 Recursive AquaCrop model 400
    23.2.2 Sensitivity informed particle filter framework 400
  23.3 Systematic validation 404
    23.3.1 Monte Carlo simulation verification 405
    23.3.2 Experimental evaluation 405
  23.4 Results 406
    23.4.1 Results of MC simulation 406
    23.4.2 Results of experimental validation 408
  23.5 Conclusions 409
References 413