NISHIO Mizuho, Graduate School of Medicine / Faculty of Medical Sciences, Associate Professor
Researcher basic information
■ Research Areas
- Informatics / Software
- Informatics / Biological, health, and medical informatics
- Life sciences / Radiology
- Jan. 2024 - Present, BMC Medical Informatics and Decision Making, Senior Editorial Board Member
- Aug. 2022 - Present, Frontiers in Nuclear Medicine, Associate Editor
- Aug. 2021 - Present, Cancers, Lead Guest Editor, Special Issue "Multi-Modality Imaging and Multi-Omics Approach of Cancers With Machine Learning/Deep Learning"
- Nov. 2020 - Present, International Journal of Imaging Systems and Technology, Editorial Board Member
- Sep. 2020 - Dec. 2023, BMC Medical Informatics and Decision Making, Editorial Board Member
- Nov. 2021 - Aug. 2022, Frontiers in Nuclear Medicine, Review Editor
- Jan. 2021 - Apr. 2022, Frontiers in Artificial Intelligence, Lead Guest Editor, Research Topic "Automatic Lung Nodule Detection with Deep Learning"
- Feb. 2020 - Apr. 2021, Applied Sciences, Lead Guest Editor, Special Issue "Machine Learning/Deep Learning in Medical Image Processing"
- Jun. 2019 - Nov. 2020, Heliyon, Editorial Advisory Board Member
- Apr. 2019 - May 2019, Heliyon, Editorial Board Member
Research activity information
■ Award
- May 2023 The 50 most cited articles on artificial intelligence for lung cancer imaging, Computer-aided diagnosis of lung nodule classification between benign nodule, primary lung cancer, and metastatic lung cancer at different image size using deep convolutional neural network with transfer learning
- Apr. 2022 Annals of Nuclear Medicine, 2021 Frequently Cited Papers, Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on F-18 FDG-PET/CT.
- Feb. 2021 Translational Lung Cancer Research, Reviewer of the Month (February, 2021)
- Dec. 2019 PLOS ONE, The top 10% most cited PLOS ONE authors of 2018, Computer-aided diagnosis of lung nodule classification between benign nodule, primary lung cancer, and metastatic lung cancer at different image size using deep convolutional neural network with transfer learning
- Mar. 2019 Insights into Imaging, Most Downloaded Paper Award 2018, Convolutional neural networks: an overview and application in radiology
■ Paper
- BACKGROUND/OBJECTIVES: This study aimed to investigate the accuracy of Tumor, Node, Metastasis (TNM) classification based on radiology reports using GPT3.5-turbo (GPT3.5) and the utility of multilingual large language models (LLMs) in both Japanese and English. METHODS: Utilizing GPT3.5, we developed a system to automatically generate TNM classifications from chest computed tomography reports for lung cancer and evaluate its performance. We statistically analyzed the impact of providing full or partial TNM definitions in both languages using a generalized linear mixed model. RESULTS: The highest accuracy was attained with full TNM definitions and radiology reports in English (M = 94%, N = 80%, T = 47%, and TNM combined = 36%). Providing definitions for each of the T, N, and M factors statistically improved their respective accuracies (T: odds ratio [OR] = 2.35, p < 0.001; N: OR = 1.94, p < 0.01; M: OR = 2.50, p < 0.001). Japanese reports exhibited decreased N and M accuracies (N accuracy: OR = 0.74 and M accuracy: OR = 0.21). CONCLUSIONS: This study underscores the potential of multilingual LLMs for automatic TNM classification in radiology reports. Even without additional model training, performance improvements were evident with the provided TNM definitions, indicating LLMs' relevance in radiology contexts.Oct. 2024, Cancers, 16(21) (21), English, International magazine[Refereed]Scientific journal
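As a rough, hypothetical illustration of the kind of pipeline described in the entry above (not the study's actual code), the sketch below sends a radiology report plus TNM definitions to a GPT-3.5-class model through the OpenAI Python client; the prompt wording, placeholder definitions, and report text are assumptions.

```python
# Hypothetical sketch: TNM classification of a radiology report with an LLM.
# Assumes the OpenAI Python client (>=1.x) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

TNM_DEFINITIONS = "T1: tumor <= 3 cm ... (full TNM definitions would go here)"  # placeholder
REPORT = "Chest CT: 2.5 cm nodule in the right upper lobe, no nodal enlargement ..."  # placeholder

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "You assign lung cancer TNM categories from radiology reports. "
                    "Use these definitions:\n" + TNM_DEFINITIONS},
        {"role": "user",
         "content": "Report:\n" + REPORT + "\n\nAnswer as 'T? N? M?' only."},
    ],
)
print(response.choices[0].message.content)  # e.g. "T1c N0 M0"
```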
- RATIONALE AND OBJECTIVES: To develop and validate a deep learning (DL) model to automatically diagnose muscle-invasive bladder cancer (MIBC) on MRI with Vision Transformer (ViT). MATERIALS AND METHODS: This multicenter retrospective study included patients with BC who reported to two institutions between January 2016 and June 2020 (training dataset) and a third institution between May 2017 and May 2022 (test dataset). The diagnostic model for MIBC and the segmentation model for BC on MRI were developed using the training dataset with 5-fold cross-validation. ViT- and convolutional neural network (CNN)-based diagnostic models were developed and compared for diagnostic performance using the area under the curve (AUC). The performance of the diagnostic model with manual and auto-generated regions of interest (ROImanual and ROIauto, respectively) was validated on the test dataset and compared to that of radiologists (three senior and three junior radiologists) using Vesical Imaging Reporting and Data System scoring. RESULTS: The training and test datasets included 170 and 53 patients, respectively. Mean AUC of the top 10 ViT-based models with 5-fold cross-validation outperformed those of the CNN-based models (0.831 ± 0.003 vs. 0.713 ± 0.007-0.812 ± 0.006, p < .001). The diagnostic model with ROImanual achieved AUC of 0.872 (95 % CI: 0.777, 0.968), which was comparable to that of junior radiologists (AUC = 0.862, 0.873, and 0.930). Semi-automated diagnosis with the diagnostic model with ROIauto achieved AUC of 0.815 (95 % CI: 0.696, 0.935). CONCLUSION: The DL model effectively diagnosed MIBC. The ViT-based model outperformed CNN-based models, highlighting its utility in medical image analysis.Aug. 2024, Heliyon, 10(16) (16), e36144, English, International magazine[Refereed]Scientific journal
- PURPOSE: Flow-diverter (FD) stents were developed to treat aneurysms that are difficult to treat with conventional coiling or surgery. This study aimed to compare usefulness of Silent MRA and TOF (time of flight) -MRA in patients with aneurysms after FD placement. MATERIALS AND METHODS: We retrospectively collected images from 22 patients with 23 internal carotid artery aneurysms treated with FD. Two radiologists conducted MRA and DSA experiments. In the first reading experiment, the radiologists evaluated the aneurysm filling by employing Silent MRA and TOF-MRA and utilizing the modified O'Kelly-Marotta (OKM) scale, a four-class classification system for aneurysms after FD placement. We then calculated the agreement between the modified OKM scale on MRA and the original OKM scale on DSA. In the second reading experiment, the radiologists rated blood flow within the FD using a five-point scale. RESULTS: The weighted kappa value of the OKM scale between DSA and TOF-MRA was 0.436 (moderate agreement), and that between DSA and Silent MRA was 0.943 (almost perfect agreement). The accuracies for the four-class classification were 0.435 and 0.870 for TOF-MRA and Silent MRA, respectively. The mean score of blood flow within FD for TOF-MRA was 2.43 ± 0.90 and that for Silent MRA was 3.04 ± 1.02 (P < 0.001). CONCLUSION: Silent MRA showed a higher degree of agreement than TOF-MRA in aneurysm filling with DSA. In addition, Silent MRA was significantly superior to TOF-MRA in depicting blood flow within the FD. Therefore, Silent MRA is clinically useful for the follow-up of patients after FD placement.Aug. 2024, Japanese journal of radiology, 42(12) (12), 1403 - 1412, English, Domestic magazine[Refereed]Scientific journal
- Jun. 2024, European radiology, 34(12), 7696 - 7697, English, International magazine
- PURPOSE: To investigate the possibility of distinguishing between IgG4-related ophthalmic disease (IgG4-ROD) and orbital MALT lymphoma using artificial intelligence (AI) and hematoxylin-eosin (HE) images. METHODS: After identifying a total of 127 patients from whom we were able to procure tissue blocks with IgG4-ROD and orbital MALT lymphoma, we performed histological and molecular genetic analyses, such as gene rearrangement. Subsequently, pathological HE images were collected from these patients followed by the cutting out of 10 different image patches from the HE image of each patient. A total of 970 image patches from the 97 patients were used to construct nine different models of deep learning, and the 300 image patches from the remaining 30 patients were used to evaluate the diagnostic performance of the models. Area under the curve (AUC) and accuracy (ACC) were used for the performance evaluation of the deep learning models. In addition, four ophthalmologists performed the binary classification between IgG4-ROD and orbital MALT lymphoma. RESULTS: EVA, which is a vision-centric foundation model to explore the limits of visual representation, was the best deep learning model among the nine models. The results of EVA were ACC = 73.3% and AUC = 0.807. The ACC of the four ophthalmologists ranged from 40 to 60%. CONCLUSIONS: It was possible to construct an AI software based on deep learning that was able to distinguish between IgG4-ROD and orbital MALT. This AI model may be useful as an initial screening tool to direct further ancillary investigations.May 2024, Graefe's archive for clinical and experimental ophthalmology = Albrecht von Graefes Archiv fur klinische und experimentelle Ophthalmologie, 262(10) (10), 3355 - 3366, English, International magazine[Refereed]Scientific journal
- BACKGROUND AND PURPOSE: Mean pulmonary artery pressure (mPAP) is a key index for chronic thromboembolic pulmonary hypertension (CTEPH). Using machine learning, we attempted to construct an accurate prediction model for mPAP in patients with CTEPH. METHODS: A total of 136 patients diagnosed with CTEPH were included, for whom mPAP was measured. The following patient data were used as explanatory variables in the model: basic patient information (age and sex), blood tests (brain natriuretic peptide (BNP)), echocardiography (tricuspid valve pressure gradient (TRPG)), and chest radiography (cardiothoracic ratio (CTR), right second arc ratio, and presence of avascular area). Seven machine learning methods including linear regression were used for the multivariable prediction models. Additionally, prediction models were constructed using the AutoML software. Among the 136 patients, 2/3 and 1/3 were used as training and validation sets, respectively. The average of R squared was obtained from 10 different data splittings of the training and validation sets. RESULTS: The optimal machine learning model was linear regression (averaged R squared, 0.360). The optimal combination of explanatory variables with linear regression was age, BNP level, TRPG level, and CTR (averaged R squared, 0.388). The R squared of the optimal multivariable linear regression model was higher than that of the univariable linear regression model with only TRPG. CONCLUSION: We constructed a more accurate prediction model for mPAP in patients with CTEPH than a model of TRPG only. The prediction performance of our model was improved by selecting the optimal machine learning method and combination of explanatory variables.Corresponding, Apr. 2024, PloS one, 19(4) (4), e0300716, English, International magazine[Refereed]Scientific journal
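A minimal sketch of the evaluation scheme summarized above: averaging R² over ten random 2/3 vs. 1/3 splits with a scikit-learn linear regression. The file name and column names are illustrative placeholders, not the study's data.

```python
# Hedged sketch: average R^2 over repeated 2/3 vs 1/3 train/validation splits.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cteph_features.csv")           # hypothetical file
X = df[["age", "BNP", "TRPG", "CTR"]].values     # variable set reported above
y = df["mPAP"].values

scores = []
for seed in range(10):                           # 10 different data splittings
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=1/3, random_state=seed)
    model = LinearRegression().fit(X_tr, y_tr)
    scores.append(r2_score(y_va, model.predict(X_va)))

print(f"averaged R^2: {np.mean(scores):.3f}")
```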
- Elsevier BV, Mar. 2024, Informatics in Medicine Unlocked, 46, 101465[Refereed]Scientific journal
- RATIONALE AND OBJECTIVES: Pericardial fat (PF), the thoracic visceral fat surrounding the heart, promotes the development of coronary artery disease by inducing inflammation of the coronary arteries. To evaluate PF, we generated pericardial fat count images (PFCIs) from chest radiographs (CXRs) using a dedicated deep-learning model. MATERIALS AND METHODS: We reviewed data of 269 consecutive patients who underwent coronary computed tomography (CT). We excluded patients with metal implants, pleural effusion, history of thoracic surgery, or malignancy. Thus, the data of 191 patients were used. We generated PFCIs from the projection of three-dimensional CT images, wherein fat accumulation was represented by a high pixel value. Three different deep-learning models, including CycleGAN, were combined in the proposed method to generate PFCIs from CXRs. A single CycleGAN-based model was used to generate PFCIs from CXRs for comparison with the proposed method. To evaluate the image quality of the generated PFCIs, the structural similarity index measure (SSIM), mean squared error (MSE), and mean absolute error (MAE) of (i) the PFCI generated using the proposed method and (ii) the PFCI generated using the single model were compared. RESULTS: The mean SSIM, MSE, and MAE were 8.56 × 10⁻¹, 1.28 × 10⁻², and 3.57 × 10⁻², respectively, for the proposed model, and 7.62 × 10⁻¹, 1.98 × 10⁻², and 5.04 × 10⁻², respectively, for the single CycleGAN-based model. CONCLUSION: PFCIs generated from CXRs with the proposed model showed better performance than those generated with the single model. The evaluation of PF without CT may be possible using the proposed method. Corresponding, Mar. 2024, Academic radiology, 31(3), 822 - 829, English, International magazine[Refereed]Scientific journal
- The Japanese Society of Nuclear Medicine, 2024, Kaku Igaku (Japanese Journal of Nuclear Medicine), 61(Suppl.), S173, Japanese. Reproducibility of a deep learning-assisted attenuation correction method in chest PET/MRI
- PURPOSE: This study aimed to enhance the multidimensional nominal response model (MDNRM) for multiclass classification in diagnostic radiology. MATERIALS AND METHODS: This retrospective study involved the extension of the conventional nominal response model (NRM) to create the two-parameter MDNRM (2PL-MDNRM). Seven models of MDNRM, including the original MDNRM and subtypes of 2PL-MDNRM, were employed to estimate test-takers' abilities and test item complexity. These models were applied to a clinical diagnostic radiology dataset. Rhat values were calculated to evaluate model convergence. Additionally, values of the widely applicable information criterion (wAIC) and Pareto-smoothed importance sampling leave-one-out cross-validation (LOO) were calculated to evaluate the goodness of fit of the seven models. The best-performing model was selected based on the values of wAIC and LOO. Probability of direction (PD) was used to evaluate whether one estimated parameter significantly differed. RESULTS: All estimated parameters across the seven models demonstrated Rhat values below 1.10, indicating stable convergence. The best wAIC and LOO values (988 and 1,121, respectively) were achieved with 2PL-MDNRM r using the truncated normal distribution and 2PL-MDNRM a using the truncated normal distribution. Notably, one test-taker (radiologist) exhibited significantly superior ability compared to another based on PD results from the best models, while no significant difference was observed in nonoptimal models. CONCLUSION: 2PL-MDNRM successfully achieved parameter estimation convergence, and its superiority over the original MDNRM was demonstrated through wAIC and LOO values.2024, PeerJ. Computer science, 10, e2380, English, International magazine[Refereed]Scientific journal
- 2024, Nihon Hoshasen Gijutsu Gakkai zasshi, 80(6), 673 - 678, Japanese, Domestic magazine[Invited]Scientific journal
- PURPOSE: To examine the molecular biological differences between conjunctival mucosa-associated lymphoid tissue (MALT) lymphoma and orbital MALT lymphoma in ocular adnexa lymphoma. METHODS: Observational case series of 129 consecutive, randomized cases of ocular adnexa MALT lymphoma diagnosed histopathologically between 2008 and 2020. Total RNA was extracted from formalin-fixed paraffin-embedded tissue from ocular adnexa MALT lymphoma, and RNA-sequencing was performed. Orbital MALT lymphoma gene expression was compared with that of conjunctival MALT lymphoma. Gene set (GS) analysis to detect gene set clusters was performed on the RNA-sequencing data. Related proteins were further examined by immunohistochemical staining. In addition, artificially segmented images were used to count the stromal area in HE images. RESULTS: GS analysis showed differences in expression in 29 GS types in primary orbital MALT lymphoma (N=5,5, FDR q-value <0.25). The GS with the greatest difference in expression was the GS of epithelial-mesenchymal transition (EMT). Based on this GS change, immunohistochemical staining was added using E-cadherin as an epithelial marker and vimentin as a mesenchymal marker for EMT. There was significant staining of vimentin in orbital lymphoma (P<0.01, N=129) and of E-cadherin in conjunctival lesions (P=0.023, N=129). Vimentin staining correlated with Ann Arbor staging (1 versus >1) independent of age and sex on multivariate analysis (P=0.004). The stromal area in the tumor also differed significantly (P<0.01). CONCLUSION: GS changes, including EMT, and the stromal area in the tumor were used to demonstrate the molecular biological differences between conjunctival MALT lymphoma and orbital MALT lymphoma in ocular adnexa lymphomas. 2024, Frontiers in oncology, 14, 1277749, English, International magazine[Refereed]Scientific journal
- Dec. 2023, Proceedings of the 17th NTCIR Conference on Evaluation of Information Access Technologies, NTCIR, 200 - 207[Refereed]International conference proceedings
- Dec. 2023, Proceedings of the 17th NTCIR Conference on Evaluation of Information Access Technologies, NTCIR, 155 - 162[Refereed]International conference proceedings
- OBJECTIVES: To build preoperative prediction models with and without MRI for regional lymph node metastasis (r-LNM, pelvic and/or para-aortic LNM (PENM/PANM)) and for PANM in endometrial cancer using established risk factors. METHODS: In this retrospective two-center study, 364 patients with endometrial cancer were included: 253 in the model development and 111 in the external validation. For r-LNM and PANM, respectively, best subset regression with ten-time fivefold cross validation was conducted using ten established risk factors (4 clinical and 6 imaging factors). Models with the top 10 percentile of area under the curve (AUC) and with the fewest variables in the model development were subjected to the external validation (11 and 4 candidates, respectively, for r-LNM and PANM). Then, the models with the highest AUC were selected as the final models. Models without MRI findings were developed similarly, assuming the cases where MRI was not available. RESULTS: The final r-LNM model consisted of pelvic lymph node (PEN) ≥ 6 mm, deep myometrial invasion (DMI) on MRI, CA125, para-aortic lymph node (PAN) ≥ 6 mm, and biopsy; PANM model consisted of DMI, PAN, PEN, and CA125 (in order of correlation coefficient β values). The AUCs were 0.85 (95%CI: 0.77-0.92) and 0.86 (0.75-0.94) for the external validation, respectively. The model without MRI for r-LNM and PANM showed AUC of 0.79 (0.68-0.89) and 0.87 (0.76-0.96), respectively. CONCLUSIONS: The prediction models created by best subset regression with cross validation showed high diagnostic performance for predicting LNM in endometrial cancer, which may avoid unnecessary lymphadenectomies. CLINICAL RELEVANCE STATEMENT: The prediction risks of lymph node metastasis (LNM) and para-aortic LNM can be easily obtained for all patients with endometrial cancer by inputting the conventional clinical information into our models. They help in the decision-making for optimal lymphadenectomy and personalized treatment. KEY POINTS: •Diagnostic performance of lymph node metastases (LNM) in endometrial cancer is low based on size criteria and can be improved by combining with other clinical information. •The optimized logistic regression model for regional LNM consists of lymph node ≥ 6 mm, deep myometrial invasion, cancer antigen-125, and biopsy, showing high diagnostic performance. •Our model predicts the preoperative risk of LNM, which may avoid unnecessary lymphadenectomies.Oct. 2023, European radiology, 34(5) (5), 3375 - 3384, English, International magazine[Refereed]Scientific journal
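The sketch below illustrates, under stated assumptions, what "best subset regression with cross-validation" can look like in scikit-learn: every feature combination is scored by cross-validated AUC and the best one is kept. The feature names, the CSV file, and the 5-fold scoring are hypothetical and not the study's exact ten-time fivefold procedure.

```python
# Hedged sketch: exhaustive "best subset" logistic regression scored by
# cross-validated AUC. Feature names and data loading are hypothetical.
from itertools import combinations
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("endometrial_features.csv")   # hypothetical file
features = ["PEN_ge_6mm", "DMI_on_MRI", "CA125", "PAN_ge_6mm", "biopsy_grade"]
y = df["regional_LNM"].values

best_auc, best_subset = 0.0, None
for k in range(1, len(features) + 1):
    for subset in combinations(features, k):
        X = df[list(subset)].values
        auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                              cv=5, scoring="roc_auc").mean()
        if auc > best_auc:
            best_auc, best_subset = auc, subset

print(best_subset, round(best_auc, 3))
```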
- To evaluate the diagnostic performance of our deep learning (DL) model of COVID-19 and investigate whether the diagnostic performance of radiologists was improved by referring to our model. Our datasets contained chest X-rays (CXRs) for the following three categories: normal (NORMAL), non-COVID-19 pneumonia (PNEUMONIA), and COVID-19 pneumonia (COVID). We used two public datasets and private dataset collected from eight hospitals for the development and external validation of our DL model (26,393 CXRs). Eight radiologists performed two reading sessions: one session was performed with reference to CXRs only, and the other was performed with reference to both CXRs and the results of the DL model. The evaluation metrics for the reading session were accuracy, sensitivity, specificity, and area under the curve (AUC). The accuracy of our DL model was 0.733, and that of the eight radiologists without DL was 0.696 ± 0.031. There was a significant difference in AUC between the radiologists with and without DL for COVID versus NORMAL or PNEUMONIA (p = 0.0038). Our DL model alone showed better diagnostic performance than that of most radiologists. In addition, our model significantly improved the diagnostic performance of radiologists for COVID versus NORMAL or PNEUMONIA.Corresponding, Oct. 2023, Scientific reports, 13(1) (1), 17533 - 17533, English, International magazine[Refereed]Scientific journal
- Purpose: The purpose of this study is to compare two libraries dedicated to the Markov chain Monte Carlo method: pystan and numpyro. In the comparison, we mainly focused on the agreement of estimated latent parameters and the performance of sampling using the Markov chain Monte Carlo method in Bayesian item response theory (IRT). Materials and methods: Bayesian 1PL-IRT and 2PL-IRT were implemented with pystan and numpyro. Then, the Bayesian 1PL-IRT and 2PL-IRT were applied to two types of medical data obtained from a published article. The same prior distributions of latent parameters were used in both pystan and numpyro. Estimation results of latent parameters of 1PL-IRT and 2PL-IRT were compared between pystan and numpyro. Additionally, the computational cost of the Markov chain Monte Carlo method was compared between the two libraries. To evaluate the computational cost of IRT models, simulation data were generated from the medical data and numpyro. Results: For all the combinations of IRT types (1PL-IRT or 2PL-IRT) and medical data types, the mean and standard deviation of the estimated latent parameters were in good agreement between pystan and numpyro. In most cases, the sampling time using the Markov chain Monte Carlo method was shorter in numpyro than in pystan. When the large-sized simulation data were used, numpyro with a graphics processing unit was useful for reducing the sampling time. Conclusion: Numpyro and pystan were useful for applying the Bayesian 1PL-IRT and 2PL-IRT. Our results show that the two libraries yielded similar estimation results and that the faster library for sampling differed depending on the dataset size. PeerJ, Oct. 2023, PeerJ Computer Science, 9, e1620, English, International magazine[Refereed]Scientific journal
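As a minimal sketch of the Bayesian 1PL-IRT setup compared in the entry above, the code below defines a one-parameter logistic IRT model in numpyro and samples it with NUTS; the reader/item counts and the toy responses are assumptions for illustration only.

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

N_PERSONS, N_ITEMS = 6, 150  # illustrative sizes (e.g., 6 readers, 150 cases)

def one_pl_irt(person_idx, item_idx, responses=None):
    # Person ability and item difficulty, both with standard normal priors.
    with numpyro.plate("persons", N_PERSONS):
        theta = numpyro.sample("theta", dist.Normal(0.0, 1.0))
    with numpyro.plate("items", N_ITEMS):
        b = numpyro.sample("b", dist.Normal(0.0, 1.0))
    logits = theta[person_idx] - b[item_idx]
    numpyro.sample("y", dist.Bernoulli(logits=logits), obs=responses)

# Toy data: every reader answers every item.
rng = random.PRNGKey(0)
person_idx = jnp.repeat(jnp.arange(N_PERSONS), N_ITEMS)
item_idx = jnp.tile(jnp.arange(N_ITEMS), N_PERSONS)
responses = random.bernoulli(rng, 0.6, shape=person_idx.shape).astype(jnp.int32)

mcmc = MCMC(NUTS(one_pl_irt), num_warmup=500, num_samples=500)
mcmc.run(rng, person_idx, item_idx, responses=responses)
mcmc.print_summary()
```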
- PURPOSE: The purpose of this study was to develop artificial intelligence algorithms that can distinguish between orbital and conjunctival mucosa-associated lymphoid tissue (MALT) lymphomas in pathological images. METHODS: Tissue blocks with residual MALT lymphoma and data from histological and flow cytometric studies and molecular genetic analyses such as gene rearrangement were procured for 129 patients treated between April 2008 and April 2020. We collected pathological hematoxylin and eosin-stained (HE) images of lymphoma from these patients and cropped 10 different image patches at a resolution of 2048 × 2048 from pathological images from each patient. A total of 990 images from 99 patients were used to create and evaluate machine-learning models. Each image patch of three different magnification rates at ×4, ×20, and ×40 underwent texture analysis to extract features, and then seven different machine-learning algorithms were applied to the results to create models. Cross-validation on a patient-by-patient basis was used to create and evaluate models, and then 300 images from the remaining 30 cases were used to evaluate the average accuracy rate. RESULTS: Ten-fold cross-validation using the support vector machine with linear kernel algorithm was identified as the best algorithm for discriminating between conjunctival mucosa-associated lymphoid tissue and orbital MALT lymphomas, with an average accuracy rate under cross-validation of 85%. There were ×20 magnification HE images that were more accurate in distinguishing orbital and conjunctival MALT lymphomas among ×4, ×20, and ×40. CONCLUSION: Artificial intelligence algorithms can successfully distinguish HE images between orbital and conjunctival MALT lymphomas.Aug. 2023, Current eye research, 48(12) (12), 1 - 8, English, International magazine[Refereed]Scientific journal
- Association for Computational Linguistics, Jul. 2023, ACL2023, The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, 50 - 61[Refereed]International conference proceedings
- We aimed to develop and evaluate an automatic prediction system for grading histopathological images of prostate cancer. A total of 10,616 whole slide images (WSIs) of prostate tissue were used in this study. The WSIs from one institution (5160 WSIs) were used as the development set, while those from the other institution (5456 WSIs) were used as the unseen test set. Label distribution learning (LDL) was used to address a difference in label characteristics between the development and test sets. A combination of EfficientNet (a deep learning model) and LDL was utilized to develop an automatic prediction system. Quadratic weighted kappa (QWK) and accuracy in the test set were used as the evaluation metrics. The QWK and accuracy were compared between systems with and without LDL to evaluate the usefulness of LDL in system development. The QWK and accuracy were 0.364 and 0.407 in the systems with LDL and 0.240 and 0.247 in those without LDL, respectively. Thus, LDL improved the diagnostic performance of the automatic prediction system for the grading of histopathological images for cancer. By handling the difference in label characteristics using LDL, the diagnostic performance of the automatic prediction system could be improved for prostate cancer grading.MDPI AG, Feb. 2023, Cancers, 15(5) (5), 1535 - 1535, English, International magazine[Refereed]Scientific journal
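Quadratic weighted kappa (QWK), the evaluation metric above, can be computed with scikit-learn as sketched below; the grade labels are toy values, not the study's data.

```python
# Hedged sketch: quadratic weighted kappa (QWK) with scikit-learn.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 4, 2, 1]   # toy grade labels (illustrative)
y_pred = [0, 1, 1, 3, 3, 2, 0]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK = {qwk:.3f}")
```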
- This study aimed to develop a versatile automatic segmentation model of bladder cancer (BC) on MRI using a convolutional neural network and investigate the robustness of radiomics features automatically extracted from apparent diffusion coefficient (ADC) maps. This two-center retrospective study used multi-vendor MR units and included 170 patients with BC, of whom 140 were assigned to training datasets for the modified U-net model with five-fold cross-validation and 30 to test datasets for assessment of segmentation performance and reproducibility of automatically extracted radiomics features. For model input data, diffusion-weighted images with b = 0 and 1000 s/mm2, ADC maps, and multi-sequence images (b0-b1000-ADC maps) were used. Segmentation accuracy was compared between ours and existing models. The reproducibility of radiomics features on ADC maps was evaluated using intraclass correlation coefficient. The model with multi-sequence images achieved the highest Dice similarity coefficient (DSC) with five-fold cross-validation (mean DSC = 0.83 and 0.79 for the training and validation datasets, respectively). The median (interquartile range) DSC of the test dataset model was 0.81 (0.70-0.88). Radiomics features extracted from manually and automatically segmented BC exhibited good reproducibility. Thus, our U-net model performed highly accurate segmentation of BC, and radiomics features extracted from the automatic segmentation results exhibited high reproducibility.Jan. 2023, Scientific reports, 13(1) (1), 628 - 628, English, International magazine[Refereed]Scientific journal
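A small sketch of the Dice similarity coefficient (DSC) used above to score segmentation overlap; the masks are toy arrays, and this is not the study's implementation.

```python
# Hedged sketch of the Dice similarity coefficient (DSC) for binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

# Toy example: two partially overlapping 2x2 squares.
a = np.zeros((4, 4), dtype=np.uint8); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=np.uint8); b[1:3, 2:4] = 1
print(round(dice_coefficient(a, b), 3))  # 0.5
```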
- The Japanese Society of Nuclear Medicine, 2023, Kaku Igaku (Japanese Journal of Nuclear Medicine), 60(Suppl.), S184, Japanese. Attenuation correction for chest PET/MRI: deep learning-based noise reduction and pseudo-CT generation using fast zero-TE MRI
- The Japanese Society of Nuclear Medicine, 2023, Kaku Igaku (Japanese Journal of Nuclear Medicine), 60(Suppl.), S206, Japanese. Effect on SUV in the chest region of attenuation correction including bone components generated from ZTE MRI by 2.5-dimensional deep learning
- Jan. 2023, Magnetic resonance imaging, 95, 119 - 120, English, International magazine
- PURPOSE: This study proposes a Bayesian multidimensional nominal response model (MD-NRM) to statistically analyze the nominal response of multiclass classifications. MATERIALS AND METHODS: First, for MD-NRM, we extended the conventional nominal response model to achieve stable convergence of the Bayesian nominal response model and utilized multidimensional ability parameters. We then applied MD-NRM to a 3-class classification problem, where radiologists visually evaluated chest X-ray images and selected their diagnosis from one of the three classes. The classification problem consisted of 150 cases, and each of the six radiologists selected their diagnosis based on a visual evaluation of the images. Consequently, 900 (= 150 × 6) nominal responses were obtained. In MD-NRM, we assumed that the responses were determined by the softmax function, the ability of radiologists, and the difficulty of images. In addition, we assumed that the multidimensional ability of one radiologist were represented by a 3 × 3 matrix. The latent parameters of the MD-NRM (ability parameters of radiologists and difficulty parameters of images) were estimated from the 900 responses. To implement Bayesian MD-NRM and estimate the latent parameters, a probabilistic programming language (Stan, version 2.21.0) was used. RESULTS: For all parameters, the Rhat values were less than 1.10. This indicates that the latent parameters of the MD-NRM converged successfully. CONCLUSION: The results show that it is possible to estimate the latent parameters (ability and difficulty parameters) of the MD-NRM using Stan. Our code for the implementation of the MD-NRM is available as open source.Dec. 2022, Japanese journal of radiology, 41(4) (4), 449 - 455, English, Domestic magazine[Refereed][Invited]Scientific journal
- Japan Association for Medical Informatics, Nov. 2022, Proceedings of the Joint Conference on Medical Informatics, 42nd, 1155 - 1158, Japanese. Comparison of named entity recognition methods for case reports
- OBJECTIVES: To develop and evaluate a deep learning-based algorithm (DLA) for automatic detection of bone metastases on CT. METHODS: This retrospective study included CT scans acquired at a single institution between 2009 and 2019. Positive scans with bone metastases and negative scans without bone metastasis were collected to train the DLA. Another 50 positive and 50 negative scans were collected separately from the training dataset and were divided into validation and test datasets at a 2:3 ratio. The clinical efficacy of the DLA was evaluated in an observer study with board-certified radiologists. Jackknife alternative free-response receiver operating characteristic analysis was used to evaluate observer performance. RESULTS: A total of 269 positive scans including 1375 bone metastases and 463 negative scans were collected for the training dataset. The number of lesions identified in the validation and test datasets was 49 and 75, respectively. The DLA achieved a sensitivity of 89.8% (44 of 49) with 0.775 false positives per case for the validation dataset and 82.7% (62 of 75) with 0.617 false positives per case for the test dataset. With the DLA, the overall performance of nine radiologists with reference to the weighted alternative free-response receiver operating characteristic figure of merit improved from 0.746 to 0.899 (p < .001). Furthermore, the mean interpretation time per case decreased from 168 to 85 s (p = .004). CONCLUSION: With the aid of the algorithm, the overall performance of radiologists in bone metastases detection improved, and the interpretation time decreased at the same time. KEY POINTS: • A deep learning-based algorithm for automatic detection of bone metastases on CT was developed. • In the observer study, overall performance of radiologists in bone metastases detection improved significantly with the aid of the algorithm. • Radiologists' interpretation time decreased at the same time.Nov. 2022, European radiology, 32(11) (11), 7976 - 7987, English, International magazine[Refereed]Scientific journal
- Jul. 2022, Radiology, 305(2), 221398, English, International magazine
- The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner simultaneously acquires metabolic information via PET and morphological information using MRI. However, attenuation correction, which is necessary for quantitative PET evaluation, is difficult as it requires the generation of attenuation-correction maps from MRI, which has no direct relationship with the gamma-ray attenuation information. MRI-based bone tissue segmentation is potentially available for attenuation correction in relatively rigid and fixed organs such as the head and pelvis regions. However, this is challenging for the chest region because of respiratory and cardiac motions in the chest, its anatomically complicated structure, and the thin bone cortex. We propose a new method using unsupervised generative attentional networks with adaptive layer-instance normalisation for image-to-image translation (U-GAT-IT), which specialised in unpaired image transformation based on attention maps for image transformation. We added the modality-independent neighbourhood descriptor (MIND) to the loss of U-GAT-IT to guarantee anatomical consistency in the image transformation between different domains. Our proposed method obtained a synthesised computed tomography of the chest. Experimental results showed that our method outperforms current approaches. The study findings suggest the possibility of synthesising clinically acceptable computed tomography images from chest MRI with minimal changes in anatomical structures without human annotation.Jun. 2022, Scientific reports, 12(1) (1), 11090 - 11090, English, International magazine[Refereed]Scientific journal
- Qeios Ltd, Jun. 2022[Invited]Research society
- Springer Science and Business Media LLC, Jun. 2022, International Journal of Computer Assisted Radiology and Surgery, 17(S1), S110 - S111[Refereed]International conference proceedings
- Jun. 2022, Proceedings of the 16th NTCIR Conference on Evaluation of Information Access Technologies, 316 - 321, EnglishLeveraging Token-Based Concept Information and Data Augmentation in Few-Resource NER: ZuKyo-EN at the NTCIR-16 Real-MedNLP task[Refereed]International conference proceedings
- Jun. 2022, Proceedings of the 16th NTCIR Conference on Evaluation of Information Access Technologies, 322 - 329, EnglishApproach for Named Entity Recognition and Case Identification Implemented by ZuKyo-JA Sub-team at the NTCIR-16 Real-MedNLP Task[Refereed]International conference proceedings
- This retrospective study aimed to develop and validate a deep learning model for the classification of coronavirus disease-2019 (COVID-19) pneumonia, non-COVID-19 pneumonia, and the healthy using chest X-ray (CXR) images. One private and two public datasets of CXR images were included. The private dataset included CXR from six hospitals. A total of 14,258 and 11,253 CXR images were included in the 2 public datasets and 455 in the private dataset. A deep learning model based on EfficientNet with noisy student was constructed using the three datasets. The test set of 150 CXR images in the private dataset were evaluated by the deep learning model and six radiologists. Three-category classification accuracy and class-wise area under the curve (AUC) for each of the COVID-19 pneumonia, non-COVID-19 pneumonia, and healthy were calculated. Consensus of the six radiologists was used for calculating class-wise AUC. The three-category classification accuracy of our model was 0.8667, and those of the six radiologists ranged from 0.5667 to 0.7733. For our model and the consensus of the six radiologists, the class-wise AUC of the healthy, non-COVID-19 pneumonia, and COVID-19 pneumonia were 0.9912, 0.9492, and 0.9752 and 0.9656, 0.8654, and 0.8740, respectively. Difference of the class-wise AUC between our model and the consensus of the six radiologists was statistically significant for COVID-19 pneumonia (p value = 0.001334). Thus, an accurate model of deep learning for the three-category classification could be constructed; the diagnostic performance of our model was significantly better than that of the consensus interpretation by the six radiologists for COVID-19 pneumonia.May 2022, Scientific reports, 12(1) (1), 8214 - 8214, English, International magazine[Refereed]Scientific journal
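Class-wise AUC for a three-category classifier, as reported above, can be obtained by binarizing each class one-vs-rest; the probabilities below are toy numbers standing in for softmax outputs.

```python
# Hedged sketch: class-wise AUC for a three-category chest X-ray classifier.
import numpy as np
from sklearn.metrics import roc_auc_score

classes = ["healthy", "non-COVID-19 pneumonia", "COVID-19 pneumonia"]
y_true = np.array([0, 0, 1, 2, 2, 1, 0, 2])
probs = np.array([
    [0.8, 0.1, 0.1], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7],
    [0.2, 0.3, 0.5], [0.3, 0.5, 0.2], [0.7, 0.2, 0.1], [0.1, 0.1, 0.8],
])

for k, name in enumerate(classes):
    auc = roc_auc_score((y_true == k).astype(int), probs[:, k])  # one-vs-rest
    print(f"{name}: AUC = {auc:.3f}")
```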
- ACM, Mar. 2022, 2022 4th International Conference on Intelligent Medicine and Image Processing, 58 - 62International conference proceedings
- Japan Radiological Society, Mar. 2022, Abstracts of the Annual Meeting of the Japan Radiological Society, 81st, S232, English. Generation of Three-Dimensional CT Images of Lung Nodules using Deep Learning
- 2022, IEICE Technical Report (Web), 121(347(MI2021 42-89)), English. Investigation of post-implementation training for medical image reading support systems. International conference proceedings
- PURPOSE: To evaluate radiomic machine learning (ML) classifiers based on multiparametric magnetic resonance images (MRI) in pretreatment assessment of endometrial cancer (EC) risk factors and to examine effects on radiologists' interpretation of deep myometrial invasion (dMI). METHODS: This retrospective study examined 200 consecutive patients with EC during January 2004 -March 2017, divided randomly to Discovery (n = 150) and Test (n = 50) datasets. Radiomic features of tumors were extracted from T2-weighted images, apparent diffusion coefficient map, and contrast enhanced T1-weighed images. Using the Discovery dataset, feature selection and hyperparameter tuning for XGBoost were performed. Ten classifiers were built to predict dMI, histological grade, lymphovascular invasion (LVI), and pelvic/paraaortic lymph node metastasis (PLNM/PALNM), respectively. Using the Test dataset, the diagnostic performances of ten classifiers were assessed by the area under the receiver operator characteristic curve (AUC). Next, four radiologists assessed dMI independently using MRI with a Likert scale before and after referring to inference of the ML classifier for the Test dataset. Then, AUCs obtained before and after reference were compared. RESULTS: In the Test dataset, mean AUC of ML classifiers for dMI, histological grade, LVI, PLNM, and PALNM were 0.83, 0.77, 0.81, 0.72, and 0.82. AUCs of all radiologists for dMI (0.83-0.88) were better than or equal to mean AUC of the ML classifier, which showed no statistically significant difference before and after the reference. CONCLUSION: Radiomic classifiers showed promise for pretreatment assessment of EC risk factors. Radiologists' inferences outperformed the ML classifier for dMI and showed no improvement by review.Jan. 2022, Magnetic resonance imaging, 85, 161 - 167, English, International magazine[Refereed]Scientific journal
- Many recent studies on medical image processing have involved the use of machine learning (ML) and deep learning (DL) [...] MDPI AG, Dec. 2021, Applied Sciences, 11(23), 11483[Invited]Scientific journal
- Qeios Ltd, Nov. 2021[Invited]Research society
- To determine whether temporal subtraction (TS) CT obtained with non-rigid image registration improves detection of various bone metastases during serial clinical follow-up examinations by numerous radiologists. Six board-certified radiologists retrospectively scrutinized CT images for patients with history of malignancy sequentially. These radiologists selected 50 positive and 50 negative subjects with and without bone metastases, respectively. Furthermore, for each subject, they selected a pair of previous and current CT images satisfying predefined criteria by consensus. Previous images were non-rigidly transformed to match current images and subtracted from current images to automatically generate TS images. Subsequently, 18 radiologists independently interpreted the 100 CT image pairs to identify bone metastases, both without and with TS images, with each interpretation separated from the other by an interval of at least 30 days. Jackknife free-response receiver operating characteristics (JAFROC) analysis was conducted to assess observer performance. Compared with interpretation without TS images, interpretation with TS images was associated with a significantly higher mean figure of merit (0.710 vs. 0.658; JAFROC analysis, P = 0.0027). Mean sensitivity at lesion-based was significantly higher for interpretation with TS compared with that without TS (46.1% vs. 33.9%; P = 0.003). Mean false positive count per subject was also significantly higher for interpretation with TS than for that without TS (0.28 vs. 0.15; P < 0.001). At the subject-based, mean sensitivity was significantly higher for interpretation with TS images than that without TS images (73.2% vs. 65.4%; P = 0.003). There was no significant difference in mean specificity (0.93 vs. 0.95; P = 0.083). TS significantly improved overall performance in the detection of various bone metastases.Sep. 2021, Scientific reports, 11(1) (1), 18422 - 18422, English, International magazine[Refereed]Scientific journal
- Purpose: The purpose of this study was to develop and evaluate lung cancer segmentation with a pretrained model and transfer learning. The pretrained model was constructed from an artificial dataset generated using a generative adversarial network (GAN). Materials and Methods: Three public datasets containing images of lung nodules/lung cancers were used: LUNA16 dataset, Decathlon lung dataset, and NSCLC radiogenomics. The LUNA16 dataset was used to generate an artificial dataset for lung cancer segmentation with the help of the GAN and 3D graph cut. Pretrained models were then constructed from the artificial dataset. Subsequently, the main segmentation model was constructed from the pretrained models and the Decathlon lung dataset. Finally, the NSCLC radiogenomics dataset was used to evaluate the main segmentation model. The Dice similarity coefficient (DSC) was used as a metric to evaluate the segmentation performance. Results: The mean DSC for the NSCLC radiogenomics dataset improved overall when using the pretrained models. At maximum, the mean DSC was 0.09 higher with the pretrained model than without it. Conclusion: The proposed method comprising an artificial dataset and a pretrained model can improve lung cancer segmentation as confirmed in terms of the DSC metric. Moreover, the construction of the artificial dataset for the segmentation using the GAN and 3D graph cut was found to be feasible. Frontiers Media SA, Jul. 2021, Frontiers in artificial intelligence, 4, 694815, English, International magazine[Refereed]Scientific journal
- Endometrial cancer (EC) is the most common gynecological tumor in developed countries, and preoperative risk stratification is essential for personalized medicine. There have been several radiomics studies for noninvasive risk stratification of EC using MRI. Although tumor segmentation is usually necessary for these studies, manual segmentation is not only labor-intensive but may also be subjective. Therefore, our study aimed to perform the automatic segmentation of EC on MRI with a convolutional neural network. The effect of the input image sequence and batch size on the segmentation performance was also investigated. Of 200 patients with EC, 180 patients were used for training the modified U-net model; 20 patients for testing the segmentation performance and the robustness of automatically extracted radiomics features. Using multi-sequence images and larger batch size was effective for improving segmentation accuracy. The mean Dice similarity coefficient, sensitivity, and positive predictive value of our model for the test set were 0.806, 0.816, and 0.834, respectively. The robustness of automatically extracted first-order and shape-based features was high (median ICC = 0.86 and 0.96, respectively). Other high-order features presented moderate-high robustness (median ICC = 0.57-0.93). Our model could automatically segment EC on MRI and extract radiomics features with high reliability. Corresponding, Jul. 2021, Scientific reports, 11(1), 14440, English, International magazine[Refereed]Scientific journal
- This paper reviews the application of deep learning models to the automatic diagnosis of COVID-19 on chest X-ray images. Among various deep learning models proposed for the automatic diagnosis of COVID-19, COVID-Net, CV19-Net, and the authors' model are introduced. The source code is publicly available for these three models, and the datasets of chest X-ray images are also available for two of them. It is expected that these publicly available source codes and datasets of diagnostic models will be useful for research on diagnostic models of COVID-19 and other diseases. Finally, future work on the authors' diagnostic model for COVID-19 is presented. MEDICAL IMAGING AND INFORMATION SCIENCES, Jul. 2021, Medical Imaging and Information Sciences, 38(2), 53 - 56, Japanese[Refereed][Invited]
- Jun. 2021, International Journal of Imaging Systems and Technology, 31(2), 1002 - 1008[Refereed]Scientific journal
- OBJECTIVES: To evaluate a deep learning model for predicting gestational age from fetal brain MRI acquired after the first trimester in comparison to biparietal diameter (BPD). MATERIALS AND METHODS: Our Institutional Review Board approved this retrospective study, and a total of 184 T2-weighted MRI acquisitions from 184 fetuses (mean gestational age: 29.4 weeks) who underwent MRI between January 2014 and June 2019 were included. The reference standard gestational age was based on the last menstruation and ultrasonography measurements in the first trimester. The deep learning model was trained with T2-weighted images from 126 training cases and 29 validation cases. The remaining 29 cases were used as test data, with fetal age estimated by both the model and BPD measurement. The relationship between the estimated gestational age and the reference standard was evaluated with Lin's concordance correlation coefficient (ρc) and a Bland-Altman plot. The ρc was assessed with McBride's definition. RESULTS: The ρc of the model prediction was substantial (ρc = 0.964), but the ρc of the BPD prediction was moderate (ρc = 0.920). Both the model and BPD predictions had greater differences from the reference standard at increasing gestational age. However, the upper limit of the model's prediction (2.45 weeks) was significantly shorter than that of BPD (5.62 weeks). CONCLUSIONS: Deep learning can accurately predict gestational age from fetal brain MR acquired after the first trimester. KEY POINTS: • The prediction of gestational age using ultrasound is accurate in the first trimester but becomes inaccurate as gestational age increases. • Deep learning can accurately predict gestational age from fetal brain MRI acquired in the second and third trimester. • Prediction of gestational age by deep learning may have benefits for prenatal care in pregnancies that are underserved during the first trimester.Jun. 2021, European radiology, 31(6) (6), 3775 - 3782, English, International magazine[Refereed]Scientific journal
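Lin's concordance correlation coefficient (ρc), the agreement measure above, has a closed form that is easy to compute directly; the sketch below uses toy gestational ages rather than the study's data.

```python
# Hedged sketch of Lin's concordance correlation coefficient (rho_c).
import numpy as np

def lins_ccc(x, y) -> float:
    """rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

predicted = [28.1, 30.5, 24.9, 35.2]   # toy gestational ages (weeks)
reference = [28.0, 31.0, 25.5, 34.8]
print(round(lins_ccc(predicted, reference), 3))
```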
- May 2021, European journal of gastroenterology & hepatology, 33(5), 765 - 766, English, International magazine[Refereed]Scientific journal
- The purpose of this study was to develop a computer-aided diagnosis (CAD) system for automatic classification of histopathological images of lung tissues. Two datasets (private and public datasets) were obtained and used for developing and validating CAD. The private dataset consists of 94 histopathological images that were obtained for the following five categories: normal, emphysema, atypical adenomatous hyperplasia, lepidic pattern of adenocarcinoma, and invasive adenocarcinoma. The public dataset consists of 15,000 histopathological images that were obtained for the following three categories: lung adenocarcinoma, lung squamous cell carcinoma, and benign lung tissue. These images were automatically classified using machine learning and two types of image feature extraction: conventional texture analysis (TA) and homology-based image processing (HI). Multiscale analysis was used in the image feature extraction, after which automatic classification was performed using the image features and eight machine learning algorithms. The multicategory accuracy of our CAD system was evaluated in the two datasets. In both the public and private datasets, the CAD system with HI was better than that with TA. It was possible to build an accurate CAD system for lung tissues. HI was more useful for the CAD systems than TA.Mar. 2021, Cancers, 13(6) (6), 1192, English, International magazine[Refereed]Scientific journal
- 2021, Innervision, 36(9). Step up MRI 2021 II: Latest trends in research, development, and clinical application of AI in MRI; 4. Automatic segmentation of endometrial cancer on MR images
- 2021, Proceedings of the Joint Conference on Medical Informatics (CD-ROM), 41st. Development of a Duty Schedule Generation System Using Genetic Algorithm in a University Hospital
- 2021, Proceedings of the Joint Conference on Medical Informatics (CD-ROM), 41st. Accurate prediction model of pulmonary artery mean pressure using minimally invasive examinations in chronic thromboembolic pulmonary hypertension patients
- PURPOSE: To retrospectively assess the repeatability of physiological F-18 labeled fluorodeoxyglucose (FDG) uptake in the skin on positron emission tomography/magnetic resonance imaging (PET/MRI) and explore its regional distribution and relationship with sex and age. METHODS: Out of 562 examinations with normal FDG distribution on whole-body PET/MRI, 74 repeated examinations were evaluated to assess the repeatability and regional distribution of physiological skin uptake. Furthermore, 224 examinations were evaluated to compare differences in the uptake due to sex and age. Skin segmentation on PET was performed as body-surface contouring on an MR-based attenuation correction map using an off-line reconstruction software. Bland-Altman plots were created for the repeatability assessment. Kruskal-Wallis test was performed to compare the maximum standardized uptake value (SUVmax) with regional distribution, age, and sex. RESULTS: The limits of agreement for the difference in SUVmean and SUVmax of the skin were less than 30%. The highest SUVmax was observed in the face (3.09±1.04), followed by the scalp (2.07±0.53). The SUVmax in the face of boys aged 0-9 years and 10-20 years (1.33±0.64 and 2.05±1.00, respectively) and girls aged 0-9 years (0.98±0.38) was significantly lower than that of men aged ≥20 years and girls aged ≥10 years (p<0.001). In women, the SUVmax of the face (2.31±0.71) of ≥70-year-olds was significantly lower than that of 30-39-year-olds (3.83±0.82) (p<0.05). CONCLUSION: PET/MRI enabled the quantitative analysis of skin FDG uptake with repeatability. The degree of physiological FDG uptake in the skin was the highest in the face and varied between sexes. Although attention to differences in body habitus between age groups is needed, skin FDG uptake also depended on age.2021, PloS one, 16(3) (3), e0249304, English, International magazine[Refereed]Scientific journal
- The temporal subtraction (TS) technique calculates a subtraction image between a pair of registered images acquired from the same patient at different times. Previous studies have shown that TS is effective for visualizing pathological changes over time; therefore, TS should be a useful tool for radiologists. However, artifacts caused by partial volume effects degrade the quality of thick-slice subtraction images, even with accurate image registration. Here, we propose a subtraction method for reducing artifacts in thick-slice images and discuss its implementation in high-speed processing. The proposed method is based on voxel matching, which reduces artifacts by considering gaps in the discretized positions of two images in subtraction calculations. There are two differences between the proposed method and conventional voxel matching: (1) the size of the searching region used to reduce artifacts is determined based on the discretized position gaps between images, and (2) the searching region is set on both images for symmetrical subtraction. The proposed method is implemented by adopting an accelerated subtraction calculation method that exploits the nature of linear interpolation for calculating the signal value at a point among discretized positions. We quantitatively evaluated the proposed method using synthetic data and qualitatively evaluated it using clinical data interpreted by radiologists. The evaluation showed that the proposed method was superior to conventional methods. Moreover, the processing speed of the proposed method was almost unchanged from that of the conventional methods. The results indicate that the proposed method can improve the quality of subtraction images acquired from thick-slice images. Dec. 2020, Journal of digital imaging, 33(6), 1543 - 1553, English, International magazine[Refereed]Scientific journal
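A deliberately simplified sketch of the voxel-matching idea (not the authors' accelerated, symmetric implementation): for each voxel of the current image, the subtraction keeps the smallest difference found within a small search window of the registered previous image, which suppresses artifacts from discretization gaps.

```python
# Simplified sketch of voxel matching for temporal subtraction.
import numpy as np

def voxel_matching_subtraction(current: np.ndarray, previous: np.ndarray,
                               radius: int = 1) -> np.ndarray:
    """current, previous: registered 3D volumes of identical shape."""
    pad = np.pad(previous, radius, mode="edge")
    best = np.full(current.shape, np.inf)
    signed = np.zeros(current.shape, dtype=float)
    for dz in range(2 * radius + 1):
        for dy in range(2 * radius + 1):
            for dx in range(2 * radius + 1):
                shifted = pad[dz:dz + current.shape[0],
                              dy:dy + current.shape[1],
                              dx:dx + current.shape[2]]
                diff = current - shifted
                mask = np.abs(diff) < best
                best[mask] = np.abs(diff)[mask]
                signed[mask] = diff[mask]
    return signed  # subtraction image with reduced discretization artifacts
```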
- We hypothesized that, in discrimination between benign and malignant parotid gland tumors, high diagnostic accuracy could be obtained with a small amount of imbalanced data when anomaly detection (AD) was combined with a deep learning (DL) model and the L2-constrained softmax loss. The purpose of this study was to evaluate whether the proposed method was more accurate than other commonly used DL or AD methods. Magnetic resonance (MR) images of 245 parotid tumors (22.5% malignant) were retrospectively collected. We evaluated the diagnostic accuracy of the proposed method (VGG16-based DL and AD) and that of classification models using conventional DL and AD methods. A radiologist also evaluated the MR images. ROC and precision-recall (PR) analyses were performed, and the area under the curve (AUC) was calculated. In terms of diagnostic performance, the VGG16-based model with the L2-constrained softmax loss and AD (local outlier factor) outperformed conventional DL and AD methods and a radiologist (ROC-AUC = 0.86 and PR-AUC = 0.77). The proposed method could discriminate between benign and malignant parotid tumors in MR images even when only a small amount of data with an imbalanced distribution is available. Nov. 2020, Scientific reports, 10(1), 19388, English, International magazine[Refereed]Scientific journal
- PURPOSE: To develop and evaluate a three-dimensional (3D) generative model of computed tomography (CT) images of lung nodules using a generative adversarial network (GAN). To guide the GAN, lung nodule size was used. MATERIALS AND METHODS: A public CT dataset of lung nodules was used, from where 1182 lung nodules were obtained. Our proposed GAN model used masked 3D CT images and nodule size information to generate images. To evaluate the generated CT images, two radiologists visually evaluated whether the CT images with lung nodule were true or generated, and the diagnostic ability was evaluated using receiver-operating characteristic analysis and area under the curves (AUC). Then, two models for classifying nodule size into five categories were trained, one using the true and the other using the generated CT images of lung nodules. Using true CT images, the classification accuracy of the sizes of the true lung nodules was calculated for the two classification models. RESULTS: The sensitivity, specificity, and AUC of the two radiologists were respectively as follows: radiologist 1: 81.3%, 37.7%, and 0.592; radiologist 2: 77.1%, 30.2%, and 0.597. For categorization of nodule size, the mean accuracy of the classification model constructed with true CT images was 85% (range 83.2-86.1%), and that with generated CT images was 85% (range 82.2-88.1%). CONCLUSIONS: Our results show that it was possible to generate 3D CT images of lung nodules that could be used to construct a classification model of lung nodule size without true CT images.Nov. 2020, Computers in biology and medicine, 126, 104032 - 104032, English, International magazine[Refereed]Scientific journal
- BACKGROUND AND OBJECTIVE: Currently, it is challenging to detect acute ischemic stroke (AIS)-related changes on computed tomography (CT) images. Therefore, we aimed to develop and evaluate an automatic AIS detection system involving a two-stage deep learning model. METHODS: We included 238 cases from two different institutions. AIS-related findings were annotated on each of the 238 sets of head CT images by referring to head magnetic resonance imaging (MRI) images in which an MRI examination was performed within 24 h following the CT scan. These 238 annotated cases were divided into a training set including 189 cases and test set including 49 cases. Subsequently, a two-stage deep learning detection model was constructed from the training set using the You Only Look Once v3 model and Visual Geometry Group 16 classification model. Then, the two-stage model performed the AIS detection process in the test set. To assess the detection model's results, a board-certified radiologist also evaluated the test set head CT images with and without the aid of the detection model. The sensitivity of AIS detection and number of false positives were calculated for the evaluation of the test set detection results. The sensitivity of the radiologist with and without the software detection results was compared using the McNemar test. A p-value of less than 0.05 was considered statistically significant. RESULTS: For the two-stage model and radiologist without and with the use of the software results, the sensitivity was 37.3%, 33.3%, and 41.3%, respectively, and the number of false positives per one case was 1.265, 0.327, and 0.388, respectively. On using the two-stage detection model's results, the board-certified radiologist's detection sensitivity significantly improved (p-value = 0.0313). CONCLUSIONS: Our detection system involving the two-stage deep learning model significantly improved the radiologist's sensitivity in AIS detection.Nov. 2020, Computer methods and programs in biomedicine, 196, 105711 - 105711, English, International magazine[Refereed]Scientific journal
- This study aimed to develop and validate a computer-aided diagnosis (CADx) system for classification between COVID-19 pneumonia, non-COVID-19 pneumonia, and healthy controls on chest X-ray (CXR) images. From two public datasets, 1248 CXR images were obtained, which included 215, 533, and 500 CXR images of COVID-19 pneumonia patients, non-COVID-19 pneumonia patients, and healthy subjects, respectively. The proposed CADx system utilized VGG16 as a pre-trained model and a combination of conventional augmentation and mixup as data augmentation methods. Other types of pre-trained models were compared with the VGG16-based model. Single augmentation types and no data augmentation were also evaluated. Splitting into training/validation/test sets was used when building and evaluating the CADx system. Three-category accuracy was evaluated for a test set of 125 CXR images. The three-category accuracy of the CADx system was 83.6% between COVID-19 pneumonia, non-COVID-19 pneumonia, and healthy controls. Sensitivity for COVID-19 pneumonia was more than 90%. The combination of conventional augmentation and mixup was more useful than a single augmentation type or no augmentation. In conclusion, this study was able to create an accurate CADx system for the three-category classification. The source code of our CADx system is available as open source for COVID-19 research.Oct. 2020, Scientific reports, 10(1) (1), 17532 - 17532, English, International magazine[Refereed]Scientific journal
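Mixup, one of the augmentation methods combined here, blends pairs of images together with their labels; a minimal NumPy sketch is below (the alpha value and within-batch pairing are illustrative choices, not the paper's exact settings).

```python
# Minimal mixup sketch: convex combination of two samples and their one-hot labels.
import numpy as np

def mixup_batch(images, labels, alpha=0.2, rng=np.random.default_rng()):
    """images: (N, H, W, C) float array; labels: (N, num_classes) one-hot float array."""
    lam = rng.beta(alpha, alpha)          # mixing coefficient drawn from Beta(alpha, alpha)
    perm = rng.permutation(len(images))   # random pairing within the batch
    mixed_x = lam * images + (1 - lam) * images[perm]
    mixed_y = lam * labels + (1 - lam) * labels[perm]
    return mixed_x, mixed_y
```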
- LEVEL OF EVIDENCE: 4 TECHNICAL EFFICACY STAGE: 4 J. Magn. Reson. Imaging 2020;52:1263-1264.Oct. 2020, Journal of magnetic resonance imaging : JMRI, 52(4) (4), 1263 - 1264, English, International magazine[Refereed]
- The usefulness of sparse-sampling CT with deep learning-based reconstruction for detection of metastasis of malignant ovarian tumors was evaluated. We obtained contrast-enhanced CT images (n = 141) of ovarian cancers from a public database, which were randomly divided into 71 training, 20 validation, and 50 test cases. Sparse-sampling CT images were calculated slice-by-slice by software simulation. Two deep-learning models for deep learning-based reconstruction were evaluated: Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) and deeper U-net. For the 50 test cases, we evaluated the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as quantitative measures. Two radiologists independently performed a qualitative evaluation for the following points: entire CT image quality; visibility of the iliac artery; and visibility of peritoneal dissemination, liver metastasis, and lymph node metastasis. The Wilcoxon signed-rank test and McNemar test were used to compare image quality and metastasis detectability between the two models, respectively. The mean PSNR and SSIM were better with deeper U-net than with RED-CNN. For all items of the visual evaluation, deeper U-net scored significantly better than RED-CNN. The metastasis detectability with deeper U-net was more than 95%. Sparse-sampling CT with deep learning-based reconstruction proved useful in detecting metastasis of malignant ovarian tumors and might contribute to reducing overall CT radiation exposure.Corresponding, MDPI AG, Jun. 2020, Applied Sciences, 10(13) (13), 4446 - 4446[Refereed]Scientific journal
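The quantitative measures (PSNR and SSIM) can be computed with scikit-image as in the short sketch below; the arrays are synthetic stand-ins for a reconstructed slice and its fully sampled reference.

```python
# PSNR and SSIM between a reconstructed slice and its reference (synthetic stand-in data).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(512, 512).astype(np.float32)        # fully sampled CT slice
reconstructed = reference + 0.01 * np.random.randn(512, 512).astype(np.float32)

psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
ssim = structural_similarity(reference, reconstructed, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```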
- BACKGROUND: The purpose of this study was to develop and evaluate an algorithm for bone segmentation on whole-body CT using a convolutional neural network (CNN). METHODS: Bone segmentation was performed using a network based on U-Net architecture. To evaluate its performance and robustness, we prepared three different datasets: (1) an in-house dataset comprising 16,218 slices of CT images from 32 scans in 16 patients; (2) a secondary dataset comprising 12,529 slices of CT images from 20 scans in 20 patients, which were collected from The Cancer Imaging Archive; and (3) a publicly available labelled dataset comprising 270 slices of CT images from 27 scans in 20 patients. To improve the network's performance and robustness, we evaluated the efficacy of three types of data augmentation technique: conventional method, mixup, and random image cropping and patching (RICAP). RESULTS: The network trained on the in-house dataset achieved a mean Dice coefficient of 0.983 ± 0.005 on cross validation with the in-house dataset, and 0.943 ± 0.007 with the secondary dataset. The network trained on the public dataset achieved a mean Dice coefficient of 0.947 ± 0.013 on 10 randomly generated 15-3-9 splits of the public dataset. These results outperform those reported previously. Regarding augmentation technique, the conventional method, RICAP, and a combination of these were effective. CONCLUSIONS: The CNN-based model achieved accurate bone segmentation on whole-body CT, with generalizability to various scan conditions. Data augmentation techniques enabled construction of an accurate and robust model even with a small dataset.Jun. 2020, Computers in biology and medicine, 121, 103767 - 103767, English, International magazine[Refereed]Scientific journal
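RICAP, one of the augmentation techniques evaluated, crops four training images and patches them into a single image; a minimal NumPy sketch is below (for segmentation the label masks would be cropped and patched identically, and the Beta parameter is an illustrative choice).

```python
# Minimal RICAP sketch: four random crops are tiled into one (H, W) training image.
import numpy as np

def ricap(batch, beta=0.3, rng=np.random.default_rng()):
    """batch: (N, H, W) array with N >= 4; returns one patched image of shape (H, W)."""
    n, h, w = batch.shape
    wx = int(np.round(w * rng.beta(beta, beta)))   # horizontal boundary position
    hy = int(np.round(h * rng.beta(beta, beta)))   # vertical boundary position
    sizes = [(hy, wx), (hy, w - wx), (h - hy, wx), (h - hy, w - wx)]
    crops = []
    for (ch, cw) in sizes:
        idx = rng.integers(n)                      # pick a random image for this quadrant
        top = rng.integers(0, h - ch + 1)
        left = rng.integers(0, w - cw + 1)
        crops.append(batch[idx, top:top + ch, left:left + cw])
    top_row = np.concatenate([crops[0], crops[1]], axis=1)
    bottom_row = np.concatenate([crops[2], crops[3]], axis=1)
    return np.concatenate([top_row, bottom_row], axis=0)
```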
- 金原出版, May 2020, 産婦人科の実際 = Obstetrical and gynecological practice, 69(5) (5), 469 - 474, Japanese, AI-based diagnostic support for obstetrics and gynecology MRI (Special feature: Are specialists no longer needed? The approaching AI era: will it be a trump card in the COVID-19 era?)
- May 2020, Applied Sciences, 10(10) (10), 3360 - 3360, EnglishAutomatic Pancreas Segmentation Using Coarse-Scaled 2D Model of Deep Learning: Usefulness of Data Augmentation and Deep U-Net[Refereed]Scientific journal
- RATIONALE AND OBJECTIVES: To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths of U-net for sparse-view CT reconstruction. MATERIALS AND METHODS: This study used 60 anonymized chest CT cases from a public database called "The Cancer Imaging Archive". Eight thousand images from 40 cases were used for training. Eight hundred images and 80 images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning with four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN) both quantitatively (peak signal-to-noise ratio, structural similarity index) and qualitatively (the scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality) using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. RESULTS: The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0-3.5 for the proposed CNN versus 1.0-1.0 for the preceding CNN; p < 0.001). However, only 2 of the 22 cases used for evaluating emphysema appearance (2 CNNs × 11 cases with emphysema) had an average score of ≥ 2 (on a 3-point scale). CONCLUSION: Increasing the number of contracting and expanding paths may be useful for sparse-view CT reconstruction with a CNN. However, the poor reproducibility of emphysema appearance should also be noted.Apr. 2020, Academic radiology, 27(4) (4), 563 - 574, English, International magazine[Refereed]Scientific journal
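In a simplified parallel-beam setting, sparse-view CT with a subsampling factor of 20 can be simulated by keeping every 20th projection angle, as in the scikit-image sketch below; the study's actual simulation geometry and software are not described here.

```python
# Simulating sparse-view CT by subsampling projection angles (parallel-beam stand-in).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (256, 256))
full_angles = np.linspace(0.0, 180.0, 720, endpoint=False)
sparse_angles = full_angles[::20]                     # keep every 20th view

sinogram = radon(image, theta=sparse_angles)
sparse_view_recon = iradon(sinogram, theta=sparse_angles, filter_name="ramp")
# The streaky `sparse_view_recon` would be the CNN input; the reconstruction from
# `full_angles` would serve as the training target.
```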
- Training of a convolutional neural network (CNN) generally requires a large dataset. However, it is not easy to collect a large medical image dataset. The purpose of this study is to investigate the utility of synthetic images in training CNNs and to demonstrate the applicability of unrelated images by domain transformation. Mammograms showing 202 benign and 212 malignant masses were used for evaluation. To create synthetic data, a cycle generative adversarial network was trained with 599 lung nodules in computed tomography (CT) and 1430 breast masses on digitized mammograms (DDSM). A CNN was trained for classification between benign and malignant masses. The classification performance was compared between the networks trained with the original data, augmented data, synthetic data, DDSM images, and natural images (ImageNet dataset). The results were evaluated in terms of the classification accuracy and the area under the receiver operating characteristic curves (AUC). The classification accuracy improved from 65.7% to 67.1% with data augmentation. The use of an ImageNet pretrained model was useful (79.2%). Performance was slightly improved when synthetic images or the DDSM images only were used for pretraining (67.6 and 72.5%, respectively). When the ImageNet pretrained model was trained with the synthetic images, the classification performance slightly improved (81.4%), although the difference in AUCs was not statistically significant. The use of the synthetic images had an effect similar to the DDSM images. The results of the proposed study indicated that the synthetic data generated from unrelated lesions by domain transformation could be used to increase the training samples.Apr. 2020, Computers in biology and medicine, 119, 103698 - 103698, English, International magazine[Refereed]Scientific journal
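The domain transformation rests on a CycleGAN-style cycle-consistency constraint; a minimal PyTorch sketch of that loss term is below (the generator networks are placeholders and the weight is an illustrative choice).

```python
# Cycle-consistency loss: a round trip A -> B -> A (and B -> A -> B) should reproduce the input.
import torch.nn.functional as F

def cycle_consistency_loss(G_ab, G_ba, x_a, x_b, weight=10.0):
    recon_a = G_ba(G_ab(x_a))   # e.g. CT nodule patch -> mammogram-like patch -> back
    recon_b = G_ab(G_ba(x_b))
    return weight * (F.l1_loss(recon_a, x_a) + F.l1_loss(recon_b, x_b))
```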
- RATIONALE AND OBJECTIVES: The purpose of this study was to validate a Bayesian statistical model of item response theory (IRT). IRT was used to evaluate a new modality (temporal subtraction, TS) in observer studies of radiologists, compared with a conventional modality (computed tomography). MATERIALS AND METHODS: From previously published papers, we obtained two datasets of clinical observer studies of radiologists. Those studies used a multi-reader and multi-case paradigm to evaluate radiologists' detection abilities, primarily to determine if TS could enhance the detectability of bone metastasis or brain infarctions. We applied IRT to these studies' datasets using Stan software. Before applying IRT, the radiologists' responses were recorded as binaries for each case (1 = correct, 0 = incorrect). Effect of TS on detectability was evaluated by using our IRT model and calculating the 95% credible interval of the effect. RESULTS: The mean, median, and 95% credible interval of the effect of TS were 0.913, 0.885, and 0.243-1.745 for the bone metastasis detection, and 2.524, 2.50, and 1.827-3.310, for the brain infarction detection. For both detection studies, the 95% credible intervals of the effect of TS did not include zero, indicating that TS significantly improved diagnostic ability. CONCLUSION: Judgments based on the present study results were compatible with the two previous studies. Our study results demonstrated that the Bayesian statistical model of IRT could judge a new modality's usefulness.Mar. 2020, Academic radiology, 27(3) (3), e45-e54, English, International magazine[Refereed]Scientific journal
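As a sketch of what such a model can look like (the exact parameterization used in the study is not given in this abstract), a binary-response IRT model with a modality covariate may be written as
$$\Pr(y_{ijk}=1)=\operatorname{logit}^{-1}\bigl(\theta_i - b_j + \beta_{\mathrm{TS}}\,x_k\bigr),$$
where $y_{ijk}$ indicates whether reader $i$ answered case $j$ correctly under modality $k$, $\theta_i$ is reader ability, $b_j$ is case difficulty, $x_k$ equals 1 when TS is available and 0 otherwise, and the posterior of $\beta_{\mathrm{TS}}$ (the effect of TS) is summarized by its 95% credible interval.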
- OBJECTIVE: To develop and evaluate a radiomics approach for classifying histological subtypes and epidermal growth factor receptor (EGFR) mutation status in lung cancer on PET/CT images. METHODS: PET/CT images of lung cancer patients were obtained from public databases and used to establish two datasets, respectively to classify histological subtypes (156 adenocarcinomas and 32 squamous cell carcinomas) and EGFR mutation status (38 mutant and 100 wild-type samples). Seven types of imaging features were obtained from PET/CT images of lung cancer. Two types of machine learning algorithms were used to predict histological subtypes and EGFR mutation status: random forest (RF) and gradient tree boosting (XGB). The classifiers used either a single type or multiple types of imaging features. In the latter case, the optimal combination of the seven types of imaging features was selected by Bayesian optimization. Receiver operating characteristic analysis, area under the curve (AUC), and tenfold cross validation were used to assess the performance of the approach. RESULTS: In the classification of histological subtypes, the AUC values of the various classifiers were as follows: RF, single type: 0.759; XGB, single type: 0.760; RF, multiple types: 0.720; XGB, multiple types: 0.843. In the classification of EGFR mutation status, the AUC values were: RF, single type: 0.625; XGB, single type: 0.617; RF, multiple types: 0.577; XGB, multiple types: 0.659. CONCLUSIONS: The radiomics approach to PET/CT images, together with XGB and Bayesian optimization, is useful for classifying histological subtypes and EGFR mutation status in lung cancer.Jan. 2020, Annals of nuclear medicine, 34(1) (1), 49 - 57, English, Domestic magazine[Refereed]Scientific journal
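The gradient tree boosting step with cross-validated AUC can be sketched as below on synthetic stand-in data; the hyperparameters are illustrative, and the Bayesian optimization over combinations of feature types is omitted.

```python
# XGBoost classifier evaluated with 10-fold cross-validated ROC AUC (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

X = np.random.rand(188, 100)            # stand-in for imaging features per lesion
y = np.random.randint(0, 2, size=188)   # e.g. adenocarcinoma vs squamous cell carcinoma

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
print(f"mean AUC = {auc.mean():.3f}")
```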
- OBJECTIVE: Temporal subtraction of CT (TS) images improves detection of newly developed bone metastases (BM). We sought to determine whether TS improves detection of BM by radiology residents as well. METHODS: We performed an observer study using a previously reported dataset, consisting of 60 oncology patients, each with previous and current CT images. TS images were calculated using in-house software. Four residents independently interpreted the 60 sets of CT images twice, without and with TS. They identified BM by marking suspicious lesions likely to be BM. Lesion-based sensitivity, the number of false positives per patient, and the figure-of-merit (FOM) were calculated. Detectability of BM, with and without TS, was compared between radiology residents and board-certified radiologists, as published previously. RESULTS: The FOM of the residents improved significantly with TS (p value < 0.0001). Lesion-based sensitivity, false positives per patient, and FOM were 40.8%, 0.121, and 0.657, respectively, without TS, and 58.1%, 0.0958, and 0.796, respectively, with TS. These findings were comparable with the previously published values for board-certified radiologists without TS (58.0%, 0.19, and 0.758, respectively). CONCLUSION: The detectability of BM by residents improved markedly with TS and reached that of board-certified radiologists without TS. KEY POINTS: • Detectability of bone metastases on CT by residents improved significantly when using temporal subtraction of CT (TS). • Detections by residents with TS and board-certified radiologists without TS were comparable. • TS is useful for residents as it is for board-certified radiologists.Dec. 2019, European radiology, 29(12) (12), 6439 - 6442, English, International magazine[Refereed]
- BACKGROUND: This study was performed to evaluate the clinical feasibility of a U-net for fully automatic uterine segmentation on MRI by using images of major uterine disorders. METHODS: This study included 122 female patients (14 with uterine endometrial cancer, 15 with uterine cervical cancer, and 55 with uterine leiomyoma). U-net architecture optimized for our research was used for automatic segmentation. Three-fold cross-validation was performed for validation. The results of manual segmentation of the uterus by a radiologist on T2-weighted sagittal images were used as the gold standard. Dice similarity coefficient (DSC) and mean absolute distance (MAD) were used for quantitative evaluation of the automatic segmentation. Visual evaluation using a 4-point scale was performed by two radiologists. DSC, MAD, and the score of the visual evaluation were compared between uteruses with and without uterine disorders. RESULTS: The mean DSC of our model for all patients was 0.82. The mean DSCs for patients with and without uterine disorders were 0.84 and 0.78, respectively (p = 0.19). The mean MADs for patients with and without uterine disorders were 18.5 and 21.4 [pixels], respectively (p = 0.39). The scores of the visual evaluation were not significantly different between uteruses with and without uterine disorders. CONCLUSIONS: Fully automatic uterine segmentation with our modified U-net was clinically feasible. The performance of the segmentation of our model was not influenced by the presence of uterine disorders.Nov. 2019, Computers in biology and medicine, 114, 103438 - 103438, English, International magazine[Refereed]Scientific journal
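The Dice similarity coefficient (DSC) used for quantitative evaluation can be computed for binary masks as in this minimal NumPy sketch.

```python
# Dice similarity coefficient between a predicted mask and a manual reference mask.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```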
- OBJECTIVES: To compare observer performance of detecting bone metastases between bone scintigraphy, including planar scan and single-photon emission computed tomography, and computed tomography (CT) temporal subtraction (TS). METHODS: Data on 60 patients with cancer who had undergone CT (previous and current) and bone scintigraphy were collected. Previous CT images were registered to the current ones by large deformation diffeomorphic metric mapping; the registered previous images were subtracted from the current ones to produce TS. Definitive diagnosis of bone metastases was determined by consensus between two radiologists. Twelve readers independently interpreted the following pairs of examinations: NM-pair, previous and current CTs and bone scintigraphy, and TS-pair, previous and current CTs and TS. The readers assigned likelihood levels to suspected bone metastases for diagnosis. Sensitivity, number of false positives per patient (FPP), and reading time for each pair of examinations were analysed for evaluating observer performance by performing the Wilcoxon signed-rank test. Figure-of-merit (FOM) was calculated using jackknife alternative free-response receiver operating characteristic analysis. RESULTS: The sensitivity of TS was significantly higher than that of bone scintigraphy (54.3% vs. 41.3%, p = 0.006). FPP with TS was significantly higher than that with bone scintigraphy (0.189 vs. 0.0722, p = 0.003). FOM of TS tended to be better than that of bone scintigraphy (0.742 vs. 0.691, p = 0.070). CONCLUSION: Sensitivity of TS in detecting bone metastasis was significantly higher than that of bone scintigraphy, but still limited to 54%. TS might be superior to bone scintigraphy for early detection of bone metastasis. KEY POINTS: • Computed tomography temporal subtraction was helpful in early detection of bone metastases. • Sensitivity for bone metastasis was higher for computed tomography temporal subtraction than for bone scintigraphy. • Figure-of-merit of computed tomography temporal subtraction was better than that of bone scintigraphy.Oct. 2019, European radiology, 29(10) (10), 5673 - 5681, English, International magazine[Refereed]Scientific journal
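Conceptually, temporal subtraction registers the previous CT to the current CT and subtracts the two volumes voxel-wise; in the sketch below, `deformable_register` is a placeholder for the large deformation diffeomorphic metric mapping used in the study.

```python
# Conceptual temporal subtraction: warp the previous scan onto the current one, then subtract.
import numpy as np

def temporal_subtraction(current_ct, previous_ct, deformable_register):
    warped_previous = deformable_register(moving=previous_ct, fixed=current_ct)  # placeholder
    ts = current_ct.astype(np.float32) - warped_previous.astype(np.float32)
    return ts   # newly developed lesions appear as residual signal in the difference volume
```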
- 日本医用画像工学会, Jul. 2019, 日本医用画像工学会大会予稿集, 38回, 50 - 50, Japanese, Generation of three-dimensional CT images of lung nodules using a generative adversarial network[Refereed]
- (有)科学評論社, May 2019, 呼吸器内科, 35(5) (5), 461 - 466, Japanese
- OBJECTIVE: To assess whether temporal subtraction (TS) images of brain CT improve the detection of suspected brain infarctions. METHODS: Study protocols were approved by our institutional review board, and informed consent was waived because of the retrospective nature of this study. Forty-two sets of brain CT images of 41 patients, each consisting of a pair of brain CT images scanned at two time points (previous and current) between January 2011 and November 2016, were collected for an observer performance study. The 42 sets consisted of 23 cases with a total of 77 newly developed brain infarcts or hyperdense artery signs confirmed by two radiologists who referred to additional clinical information and 19 negative control cases. To create TS images, the previous images were registered to the current images by partly using a non-rigid registration algorithm and then subtracted. Fourteen radiologists independently interpreted the images to identify the lesions with and without TS images with an interval of over 4 weeks. A figure of merit (FOM) was calculated along with the jackknife alternative free-response receiver-operating characteristic analysis. Sensitivity, number of false positives per case (FPC) and reading time were analyzed by the Wilcoxon signed-rank test. RESULTS: The mean FOM increased from 0.528 to 0.737 with TS images (p < 0.0001). The mean sensitivity and FPC improved from 26.5% and 0.243 to 56.0% and 0.153 (p < 0.0001 and p = 0.239), respectively. The mean reading time was 173 s without TS and 170 s with TS (p = 0.925). CONCLUSION: The detectability of suspected brain infarctions was significantly improved with TS CT images. KEY POINTS: • Although it is established that MRI is superior to CT in the detection of strokes, the first choice of modality for suspected stroke patients is often CT. • An observer performance study with 14 radiologists was performed to evaluate whether temporal subtraction images derived from a non-rigid transformation algorithm can significantly improve the detectability of newly developed brain infarcts on CT. • Temporal subtraction images were shown to significantly improve the detectability of newly developed brain infarcts on CT.Feb. 2019, European radiology, 29(2) (2), 759 - 769, English, International magazine[Refereed]Scientific journal
- 2019, PloS one, 14(1) (1), e0210720[Refereed]
- 2019, CoRR, abs/1908.07704Lung segmentation on chest x-ray images in patients with severe abnormal findings using deep learning.[Refereed]Scientific journal
- Convolutional neural network (CNN), a class of artificial neural networks that has become dominant in various computer vision tasks, is attracting interest across a variety of domains, including radiology. CNN is designed to automatically and adaptively learn spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers. This review article offers a perspective on the basic concepts of CNN and its application to various radiological tasks, and discusses its challenges and future directions in the field of radiology. Two challenges in applying CNN to radiological tasks, small dataset and overfitting, will also be covered in this article, as well as techniques to minimize them. Being familiar with the concepts and advantages, as well as limitations, of CNN is essential to leverage its potential in diagnostic radiology, with the goal of augmenting the performance of radiologists and improving patient care. KEY POINTS: • Convolutional neural network is a class of deep learning methods which has become dominant in various computer vision tasks and is attracting interest across a variety of domains, including radiology. • Convolutional neural network is composed of multiple building blocks, such as convolution layers, pooling layers, and fully connected layers, and is designed to automatically and adaptively learn spatial hierarchies of features through a backpropagation algorithm. • Familiarity with the concepts and advantages, as well as limitations, of convolutional neural network is essential to leverage its potential to improve radiologist performance and, eventually, patient care.Aug. 2018, Insights into imaging, 9(4) (4), 611 - 629, English, International magazineScientific journal
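As a toy example of the building blocks described here (convolution, pooling, and fully connected layers), a small PyTorch network with illustrative layer sizes:

```python
# A tiny CNN: two convolution + pooling stages followed by one fully connected layer.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # fully connected layer

    def forward(self, x):                  # x: (N, 1, 64, 64) grayscale patches
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = TinyCNN()(torch.randn(4, 1, 64, 64))   # logits of shape (4, 2)
```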
- Springer Science and Business Media LLC, Jun. 2018, International Journal of Computer Assisted Radiology and Surgery, 13(S1) (S1), S178 - S179[Refereed]International conference proceedings
- Jun. 2018, Insights into imaging[Refereed]
- Public Library of Science, Feb. 2018, PLoS ONE, 13(2) (2), e0192892, English[Refereed]Scientific journal
- We developed a computer-aided diagnosis (CADx) method for classification between benign nodule, primary lung cancer, and metastatic lung cancer and evaluated the following: (i) the usefulness of the deep convolutional neural network (DCNN) for CADx of the ternary classification, compared with a conventional method (hand-crafted imaging features plus machine learning), (ii) the effectiveness of transfer learning, and (iii) the effect of image size as the DCNN input. Of the 1240 patients in a previously built database, computed tomography images and clinical information from 1236 patients were included. For the conventional method, CADx was performed by using rotation-invariant uniform-pattern local binary patterns on three orthogonal planes with a support vector machine. For the DCNN method, CADx was evaluated using the VGG-16 convolutional neural network with and without transfer learning, and hyperparameter optimization of the DCNN method was performed by random search. The best averaged validation accuracies of CADx were 55.9%, 68.0%, and 62.4% for the conventional method, the DCNN method with transfer learning, and the DCNN method without transfer learning, respectively. For input image sizes of 56, 112, and 224, the best averaged validation accuracies for the DCNN with transfer learning were 60.7%, 64.7%, and 68.0%, respectively. The DCNN was better than the conventional method for CADx, and the accuracy of the DCNN improved when using transfer learning. We also found that larger input image sizes improved the accuracy of lung nodule classification by the DCNN.2018, PloS one, 13(7) (7), e0200721, English, International magazine[Refereed]Scientific journal
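Transfer learning with VGG-16 typically amounts to loading ImageNet weights and replacing the final fully connected layer; a short torchvision sketch for the three-class problem is below (freezing the convolutional features is optional and shown only as one common choice).

```python
# VGG-16 transfer learning sketch: reuse ImageNet weights, swap the last FC layer for 3 classes.
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)  # downloads pretrained weights
for param in model.features.parameters():
    param.requires_grad = False            # optionally freeze the convolutional features
model.classifier[6] = nn.Linear(4096, 3)   # benign nodule / primary / metastatic lung cancer
# The model can then be fine-tuned on 224x224 nodule patches with a standard training loop.
```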
- 2018, 生体医工学, 56(4) (4)Construction and Evaluation of Clinical Information Export Environment Complying with the Amended Act on the Protection of Personal Information
- We aimed to describe the development of an inference model for computer-aided diagnosis of lung nodules that could provide valid reasoning for any inferences, thereby improving the interpretability and performance of the system. An automatic construction method was used that considered explanation adequacy and inference accuracy. In addition, we evaluated the usefulness of prior experts' (radiologists') knowledge while constructing the models. In total, 179 patients with lung nodules were included and divided into 79 and 100 cases for training and test data, respectively. F-measure and accuracy were used to assess explanation adequacy and inference accuracy, respectively. For the F-measure, reasons were defined as proper subsets of evidence that had a strong influence on the inference result. The inference models were automatically constructed using the Bayesian network and Markov chain Monte Carlo methods, selecting only those models that met the predefined criteria. During model construction, we examined the effect of including radiologists' knowledge in the initial Bayesian network models. The performance of the best models in terms of F-measure, accuracy, and the evaluation metric was as follows: 0.411, 72.0%, and 0.566, respectively, with prior knowledge, and 0.274, 65.0%, and 0.462, respectively, without prior knowledge. The best models with prior knowledge were then subjectively and independently evaluated by two radiologists using a 5-point scale, with 5, 3, and 1 representing beneficial, appropriate, and detrimental, respectively. The average scores given by the two radiologists were 3.97 and 3.76 for the test data, indicating that the proposed computer-aided diagnosis system was acceptable to them. In conclusion, the proposed method incorporating radiologists' knowledge could help in eliminating radiologists' distrust of computer-aided diagnosis and improving its performance.2018, PloS one, 13(11) (11), e0207661, English, International magazine[Refereed]Scientific journal
- We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule classification focusing on (i) the usefulness of the conventional CADx system (hand-crafted imaging features + machine learning algorithm), (ii) a comparison between support vector machine (SVM) and gradient tree boosting (XGBoost) as machine learning algorithms, and (iii) the effectiveness of parameter optimization using Bayesian optimization and random search. Data on 99 lung nodules (62 lung cancers and 37 benign lung nodules) were included from public databases of CT images. A variant of the local binary pattern was used for calculating a feature vector. SVM or XGBoost was trained using the feature vector and its corresponding label. The Tree-structured Parzen Estimator (TPE) was used as Bayesian optimization for the parameters of SVM and XGBoost. Random search was done for comparison with TPE. Leave-one-out cross-validation was used for optimizing and evaluating the performance of our CADx system. Performance was evaluated using the area under the curve (AUC) of receiver operating characteristic analysis. AUC was calculated 10 times, and its average was obtained. The best averaged AUC of SVM and XGBoost was 0.850 and 0.896, respectively; both were obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters for achieving high AUC were obtained with fewer trials when using TPE, compared with random search. Bayesian optimization of SVM and XGBoost parameters was more efficient than random search. In an observer study, the AUC values of two board-certified radiologists were 0.898 and 0.822. The results show that the diagnostic accuracy of our CADx system was comparable to that of radiologists with respect to classifying lung nodules.2018, PloS one, 13(4) (4), e0195875, English, International magazine[Refereed]Scientific journal
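Tree-structured Parzen Estimator (TPE) optimization of SVM hyperparameters can be sketched with the hyperopt library as below; the search space, fold count, and synthetic data are illustrative rather than the study's exact configuration.

```python
# TPE (hyperopt) search over SVM hyperparameters, maximizing cross-validated AUC
# (hyperopt minimizes, so the AUC is negated in the objective).
import numpy as np
from hyperopt import fmin, hp, tpe
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(99, 50)               # stand-in for the texture feature vectors
y = np.random.randint(0, 2, size=99)     # benign vs malignant labels

def objective(params):
    clf = SVC(C=params["C"], gamma=params["gamma"])
    return -cross_val_score(clf, X, y, scoring="roc_auc", cv=5).mean()

space = {"C": hp.loguniform("C", np.log(1e-2), np.log(1e3)),
         "gamma": hp.loguniform("gamma", np.log(1e-4), np.log(1e1))}
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
```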
- Elsevier Ltd, Aug. 2017, Heliyon, 3(8) (8), e00393, English[Refereed]Scientific journal
- Jul. 2017, ACADEMIC RADIOLOGY, 24(7) (7), 918 - 918, English, International magazine[Refereed]
- Springer Science and Business Media LLC, Jun. 2017, International Journal of Computer Assisted Radiology and Surgery, 12(S1) (S1), S183 - S183, English[Refereed]International conference proceedings
- May 2017, PLOS ONE, 12(5) (5), e0178217, English[Refereed]Scientific journal
- Mar. 2017, ACADEMIC RADIOLOGY, 24(3) (3), 328 - 336, English[Refereed]Scientific journal
- 2017, Innervision, 32(7) (7), New Trends Series: The Next Step of Imaging Technology Vol. 7, What will artificial intelligence bring to medicine? Knowing, thinking about, and applying AI, III. Applying AI, focusing on diagnostic imaging, 5. Development of a CAD system using deep learning and a CT image database
- 2017, 日本医用画像工学会大会予稿集(CD-ROM), 36th, Artifact reduction method for temporal subtraction images of thick-slice CT
- Jan. 2017, 臨床放射線, 62(1) (1), 179 - 186, Japanese[Invited]
- Springer Science and Business Media LLC, Jun. 2016, International Journal of Computer Assisted Radiology and Surgery, 11(S1) (S1), S223 - S224, English[Refereed]International conference proceedings
- Jun. 2016, AMERICAN JOURNAL OF ROENTGENOLOGY, 206(6) (6), 1184 - 1192, English[Refereed]Scientific journal
- 2016, INTERNATIONAL JOURNAL OF CHRONIC OBSTRUCTIVE PULMONARY DISEASE, 11, 2125 - 2137, English[Refereed]Scientific journal
- Dec. 2015, AMERICAN JOURNAL OF NEURORADIOLOGY, 36(12) (12), 2400 - 2406, English[Refereed]Scientific journal
- Sep. 2015, Open Journal of Medical Imaging, 5(3) (3), 174 - 181, EnglishScientific journal
- Sep. 2015, Advances in Computed Tomography, 4(3) (3), 47 - 55, EnglishScientific journal
- Jul. 2015, JAPANESE JOURNAL OF RADIOLOGY, 33(7) (7), 441 - 447, English[Refereed]Scientific journal
- Mar. 2015, ACADEMIC RADIOLOGY, 22(3) (3), 330 - 336, English[Refereed]Scientific journal
- Mar. 2015, EUROPEAN JOURNAL OF RADIOLOGY, 84(3) (3), 509 - 515, English[Refereed]Scientific journal
- Feb. 2015, RADIOLOGY, 274(2) (2), 563 - 575, English[Refereed]Scientific journal
- 2015, MAGNETIC RESONANCE IN MEDICAL SCIENCES, 14(4) (4), 275 - 283, English[Refereed]Scientific journal
- 2015, MEDICAL IMAGING 2015: IMAGE PROCESSING, 9413, 94133P, EnglishInternational conference proceedings
- Dec. 2014, RADIOLOGY, 273(3) (3), 907 - 916, English[Refereed]Scientific journal
- Dec. 2014, EUROPEAN JOURNAL OF RADIOLOGY, 83(12) (12), 2268 - 2276, English[Refereed]Scientific journal
- Oct. 2014, ACADEMIC RADIOLOGY, 21(10) (10), 1262 - 1267, English[Refereed]Scientific journal
- Aug. 2014, EUROPEAN RADIOLOGY, 24(8) (8), 1860 - 1867, English[Refereed]Scientific journal
- Springer Science and Business Media LLC, Jun. 2014, International Journal of Computer Assisted Radiology and Surgery, 9(S1) (S1), S235 - S236, English[Refereed]International conference proceedings
- British Institute of Radiology, Jun. 2014, British Journal of Radiology, 87(1038) (1038), 20130307, English[Refereed]Scientific journal
- May 2014, EUROPEAN JOURNAL OF RADIOLOGY, 83(5) (5), 835 - 842, English[Refereed]Scientific journal
- May 2014, AMERICAN JOURNAL OF ROENTGENOLOGY, 202(5) (5), W453 - W458, English[Refereed]Scientific journal
- Apr. 2014, JOURNAL OF MAGNETIC RESONANCE IMAGING, 39(4) (4), 988 - 997, English[Refereed]Scientific journal
- Mar. 2014, AMERICAN JOURNAL OF ROENTGENOLOGY, 202(3) (3), 493 - 506, English[Refereed]Scientific journal
- Mar. 2014, AMERICAN JOURNAL OF ROENTGENOLOGY, 202(3) (3), 515 - 529, English[Refereed]
- Feb. 2014, EUROPEAN JOURNAL OF RADIOLOGY, 83(2) (2), 391 - 397, English[Refereed]Scientific journal
- Nov. 2013, European Journal of Radiology, 82(11) (11), 2018 - 2027, English[Refereed]Scientific journal
- Aug. 2013, European Journal of Radiology, 82(8) (8), 1359 - 1365, English[Refereed]Scientific journal
- Jun. 2013, AMERICAN JOURNAL OF ROENTGENOLOGY, 200(6) (6), W593 - W602, English[Refereed]Scientific journal
- May 2013, JOURNAL OF THORACIC IMAGING, 28(3) (3), 138 - 150, English[Refereed]
- 2013, 映像情報Medical, 45(1) (1), Abdominal imaging today: key points explained by experts, Abdominal perfusion CT
- 2013, 日本胸部臨床, 72(6) (6)Role of Imaging in the Diagnosis of Thymic Epithelial Tumor
- 2013, 診断と治療, 101(4) (4), What can CT & MRI tell us? The lung, focusing on the diagnosis of thoracic neoplastic lesions
- 2013, Innervision, 28(10) (10), Latest trends in diagnostic imaging of the respiratory system, II. Latest MRI techniques in the respiratory field, 2. Current status and future prospects of contrast-enhanced MRA and perfusion MRI, focusing on the respiratory field
- Dec. 2012, BRITISH JOURNAL OF RADIOLOGY, 85(1020) (1020), 1525 - 1532, English[Refereed]Scientific journal
- Oct. 2012, AMERICAN JOURNAL OF ROENTGENOLOGY, 199(4) (4), 794 - 802, English[Refereed]Scientific journal
- Sep. 2012, JOURNAL OF MAGNETIC RESONANCE IMAGING, 36(3) (3), 612 - 623, English[Refereed]Scientific journal
- Sep. 2012, AMERICAN JOURNAL OF ROENTGENOLOGY, 199(3) (3), 595 - 601, English[Refereed]Scientific journal
- Jun. 2012, EUROPEAN JOURNAL OF RADIOLOGY, 81(6) (6), 1330 - 1334, English[Refereed]Scientific journal
- Feb. 2012, EUROPEAN JOURNAL OF RADIOLOGY, 81(2) (2), 384 - 388, English[Refereed]Scientific journal
- 2012, 映像情報Medical, 44(1) (1), Key points in diagnostic imaging: examining the chest, The latest MR diagnosis in the chest region
- 2012, 画像診断, 32(6) (6)Application of Magnetic Resonance Imaging Using Ultra-Short TE for Chest Imaging
- 2012, 臨床画像, 28(7) (7), Malignant tumors: a rational approach to diagnostic imaging, How to check for recurrence rationally, Lung cancer
- 2012, 臨床画像, 28(8) (8), Is that examination really necessary? Strategies for follow-up, Part 4: Neoplastic diseases of the respiratory system and mediastinum
- Springer Science and Business Media LLC, 2012, International Journal of Computer Assisted Radiology and Surgery, 7(S1) (S1), S263 - S263, English[Refereed]International conference proceedings
- Nov. 2011, RADIOLOGY, 261(2) (2), 605 - 615, English[Refereed]Scientific journal
- Nov. 2011, JOURNAL OF THORACIC IMAGING, 26(4) (4), 301 - 316, English[Refereed]Scientific journal
- 2010, 臨床画像, 26(10) (10), Choosing between MRA and CTA, 2010 edition: the roles of CT and MRI in pulmonary vascular disease
- 2010, 神戸市立病院紀要, 48, Filmless operation at our hospital: the status of the central radiology department after introduction of PACS (Picture Archiving and Communication System) and related problems
- 2009, 神戸市立病院紀要, 47, Introduction of a large-scale PACS (Picture Archiving and Communication System) and filmless operation: the status of the central radiology department before PACS introduction
- 2009, 神戸市立病院紀要, 47, A case of thoracic spinal cord PNET treated with radiotherapy
- 京都 : 日本放射線技術学会, Jun. 2024, 日本放射線技術学会雑誌 = Japanese journal of radiological technology, 80(6) (6), 673 - 678, Japanese, Practical use of Python in radiological technology research, applied edition (11): Automatic generation of diagnostic reports for chest radiographs
- (公社)日本医学放射線学会, Feb. 2024, Japanese Journal of Radiology, 42(Suppl.) (Suppl.), 32 - 32, Japanese, A case of hibernoma arising in the sacrum
- 2024, Innervision, 39(3) (3), Abdominal Imaging 2024 in the era of precision medicine, Part 1, IV. IT innovation and challenges in abdominal diagnostic imaging, 1. Latest IT trends in the abdominal region, 1) Research on automatic summarization of diagnostic reports using Transformers
- 2024, 日本生体医工学会大会プログラム・抄録集(Web), 63rdEvaluation of Diagnostic Ability in AI Diagnosis of Chest X-ray Images
- Lead, 08 Mar. 2023, Kyoto University-University of Zurich Strategic Partnership Joint Symposium 2023Radiology report generation from chest X-ray image using 2-stage deep learning modelsSummary national conference
- (公財)日本眼科学会, Mar. 2023, 日本眼科学会雑誌, 127(臨増) (臨増), 244 - 244, Japanese, Can artificial intelligence differentiate orbital MALT lymphoma from conjunctival MALT lymphoma?
- 2023, 核医学(Web), 60(Supplement) (Supplement), Attenuation correction for chest PET/MRI: deep learning-based denoising and pseudo-CT generation using fast zero-TE MRI
- 2023, 核医学(Web), 60(Supplement) (Supplement), Effect of attenuation correction including bone components generated from ZTE MRI by 2.5-dimensional deep learning on SUV in the chest region
- (一社)日本核医学会, 2023, 核医学, 60(Suppl.) (Suppl.), S184 - S184, Japanese, Attenuation correction for chest PET/MRI: deep learning-based denoising and pseudo-CT generation using fast zero-TE MRI
- (一社)日本核医学会, 2023, 核医学, 60(Suppl.) (Suppl.), S206 - S206, Japanese, Effect of attenuation correction including bone components generated from ZTE MRI by 2.5-dimensional deep learning on SUV in the chest region
- 2022, 医療情報学連合大会論文集(CD-ROM), 42ndComparison of Named Entity Extraction Methods for Case Reports
- (一社)日本核医学会, 2022, 核医学, 59(1) (1), 35 - 35, Japanese
- (公社)日本医学放射線学会, Mar. 2021, 日本医学放射線学会学術集会抄録集, 80回, S203 - S203, English, Deep-learning-based Image Super Resolution for Super High-resolution Computed Tomography
- (NPO)日本CT検診学会, Feb. 2021, CT検診, 28(1) (1), 11 - 11, Japanese, Applications and latest trends of AI (artificial intelligence) in CT screening: deep learning-based denoising of low-dose CT and generation of lung nodules
- 2021, RSNA 2021Deep Learning-based Algorithm for Bone Metastasis Detection on CT: Evaluation on an Observer Study.Summary international conference
- 2021, ISMRM2021Automatic segmentation of uterine endometrial cancer on MRI with convolutional neural network.Summary international conference
- (公社)日本医学放射線学会, 2021, 日本医学放射線学会秋季臨床大会抄録集, 57th, S400 - S400, Japanese, Estimation of mean pulmonary arterial pressure using multiple regression analysis in patients with chronic thromboembolic pulmonary hypertension
- 2021, 核医学(Web), 58(Supplement) (Supplement), Quantitative validation of deep learning-based attenuation correction for chest PET/MRI using ZTE MRI
- 2021, 日本神経放射線学会プログラム・抄録集, 50th, Three cases of diffuse midline glioma, H3 K27M-mutant, arising in the spinal cord
- (一社)日本核医学会, Oct. 2020, 核医学, 57(Suppl.) (Suppl.), S157 - S157, English, Attenuation correction for chest PET/MRI: deep learning-based generation of pseudo-CT from ZTE using CT images from other patients
- (一社)日本核医学会, Oct. 2020, 核医学, 57(Suppl.) (Suppl.), S157 - S157, English, Texture analysis of FDG-PET using elastic net for prognosis prediction in lung cancer
- (一社)日本核医学会, Oct. 2020, 核医学, 57(Suppl.) (Suppl.), S174 - S174, Japanese, Fast imaging using BSREM in whole-body FDG PET/MRI of malignant tumors and its diagnostic performance
- 2020, RSNA 2020Anomaly Detection for a Small Amount and Highly Biased Dataset: Discrimination of Magnetic Resonance Images between Benign and Malignant Parotid TumorsSummary international conference
- 2020, ECR 2020Classification of MR Images between Benign and Malignant Parotid Tumors using Deep LearningSummary international conference
- 2020, 日本神経放射線学会プログラム・抄録集, 49th, Deep learning-based prediction of gestational age from fetal head MRI images
- 2020, 核医学(Web), 57(Supplement) (Supplement), Fast imaging using BSREM in whole-body FDG PET/MRI of malignant tumors and its diagnostic performance
- 2020, 核医学(Web), 57(Supplement) (Supplement), Attenuation correction for chest PET/MRI: deep learning-based generation of pseudo-CT from ZTE using CT images from other patients
- 2020, 核医学(Web), 57(Supplement) (Supplement), Texture analysis of FDG-PET using elastic net for prognosis prediction in lung cancer
- (公社)日本医学放射線学会, Sep. 2019, 日本医学放射線学会秋季臨床大会抄録集, 55回, S533 - S534, Japanese, Bone segmentation of CT images using deep learning: evaluation of a novel data augmentation method
- 2019, 32nd Annual Congress of the European Association of Nuclear Medicine.Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on 18F FDG-PET/CT.Summary international conference
- SPIE, 2019, Medical Imaging 2019: Image Perception, Observer Performance, and Technology Assessment, San Diego, California, United States, 16-21 February 2019, 1095210 - 1095210[Refereed]
- (公社)日本医学放射線学会, Sep. 2018, 日本医学放射線学会秋季臨床大会抄録集, 54回, S511 - S512, Japanese, Construction of a labeled image database within routine clinical diagnostic imaging work
- (公社)日本医学放射線学会, Sep. 2018, 日本医学放射線学会秋季臨床大会抄録集, 54回, S512 - S512, Japanese, Classification of lung cancer subtypes using FDG-PET/CT and machine learning
- (公社)日本医学放射線学会, 25 Feb. 2018, 日本医学放射線学会総会抄録集, 77th, S252 - S252, EnglishComputer‐aided Diagnosis of Lung Nodule Using Deep Convolutional Neural Network: Usefulness of Transfer Learning
- 2018, 第20回医用画像認知研究会, Improvement of brain infarction detection using temporal subtraction CT imagesSummary national conference
- 2018, 第20回医用画像認知研究会, Improvement of bone metastasis detection by residents using temporal subtraction CT imagesSummary national conference
- 2018, 第20回医用画像認知研究会Double U-Net for CT image reconstruction from subsampled sinogramSummary national conference
- 2018, 第77回日本医学放射線学会総会Item response theory in Radiology for "post p<0.05 era"Summary national conference
- 2018, レギュラトリーサイエンス学会誌, 8(Supplement) (Supplement), Examination of test methods for evaluating the efficacy of medical device software
- 日本医用画像工学会, Nov. 2017, MEDICAL IMAGING TECHNOLOGY, 35(5) (5), 257 - 266, Japanese, Artifact reduction method for temporal subtraction images of thick-slice CT
- (一社)日本医療情報学会, Nov. 2017, 医療情報学連合大会論文集, 37回, 458 - 461, Japanese, Construction of a clinical information export environment complying with the new personal information protection legislation
- Oct. 2017, Proceedings of the 103rd Annual Meeting of the Radiological Society of North America (RSNA2017), English, Temporal CT subtraction and bone scintigraphy in detection of bone metastasis: which is more effective?[Refereed]
- Oct. 2017, Proceedings of the 103rd Annual Meeting of the Radiological Society of North America (RSNA2017), English, Temporal CT Subtraction Images Derived by Large Deformation Diffeomorphic Metric Mapping can Improve Detectability of Brain Infarctions[Refereed]
- (公社)日本医学放射線学会, Aug. 2017, 日本医学放射線学会秋季臨床大会抄録集, 53回, S504 - S505, Japanese, Comparison of temporal subtraction bone CT images and bone scintigraphy for detection of bone metastases: their usefulness and pitfalls
- 23 May 2017, システム制御情報学会研究発表講演会講演論文集(CD-ROM), 61st, ROMBUNNO.114‐7, Japanese, Design of a computer-aided diagnosis system for lung nodules using deep learning that captures three-dimensional features
- Feb. 2017, 日本医学放射線学会総会抄録集, 76th, S228‐S229, EnglishComparison of Temporal CT Subtraction and Bone Scintigraphy Images in Detection of Bone Metastasis[Refereed]
- 2017, 第9回呼吸機能イメージング研究会, Relationship between homology-based quantitative evaluation and visual evaluation of pulmonary emphysema, and prediction of visual evaluation by machine learningSummary national conference
- (公社)日本診療放射線技師会, Sep. 2016, JART: 日本診療放射線技師会誌, 63(9) (9), 1134 - 1134, Japanese, Effect of the dose of attenuation-correction CT on PET images
- (NPO)日本肺癌学会, Feb. 2015, 肺癌(Web), 55(1) (1), 78(J‐STAGE) - 78, Japanese, Two cases of mucous metaplasia requiring differentiation from tumor in surgical specimens of lung cancer
- (NPO)日本肺癌学会, Feb. 2015, 肺癌, 55(1) (1), 78 - 78, Japanese, Two cases of mucous metaplasia requiring differentiation from tumor in surgical specimens of lung cancer
- 2015, ひょうご科学技術協会研究成果報告書(Web), 2015, WEB ONLY, Japanese, Denoising of ultra-low-dose CT using deep learning and its clinical application
- (公社)日本医学放射線学会, Sep. 2014, 日本医学放射線学会秋季臨床大会抄録集, 50回, S706 - S707, Japanese, A case of solitary fibrous tumor in the anterior mediastinum
- (NPO)日本肺癌学会, Aug. 2014, 肺癌, 54(4) (4), 243 - 243, Japanese, A case of lung adenocarcinoma suspected to be of enteric type
- 2014, CARS 2014Computer-aided diagnosis for differentiation of lung nodules on CT: a scheme using sparse coding with spatial zoningSummary international conference
- 2014, 第73回日本医学放射線学会Emphysema Quantification on Low-Dose CT: Effect of Adaptive Iterative Dose Reduction using 3D ProcessingSummary international conference
- 2014, Joint Annual Meeting ISMRM-ESMRMB 2014Non-Contrast-Enhanced Pulmonary MR Angiography based on ECG-gated 3D time-spatial labeling inversion pulse (Time-SLIP) Technique: Influence of Tag Pulse Position for Separation of Pulmonary Arteriogram and Pulmonary VenogramSummary international conference
- 2014, RSNA 2014Novel subtracted CT angiography imaging using non-rigid registration for better visualization of spinal dural arteriovenous fistulasSummary international conference
- 2014, 第73回日本医学放射線学会学術集会Whole-Body MRI vs. Whole-Body PET/CT vs. Whole-Body PET/ MRI: Capabilities for TNM and Clinical Stage Assessment in NSCLC PatientsSummary national conference
- 2014, Joint Annual Meeting ISMRM-ESMRMB 2014Whole-Body MRI vs. Co-registered Whole-Body FDG-PET with MRI (PET/MRI) vs. Integrated FDG-PET/CT: Capability of Clinical Stage and Operability Assessments in Non-Small Cell CarcinomaSummary international conference
- 2014, Joint Annual Meeting ISMRM-ESMRMB 2014Dynamic Oxygen-Enhanced MRI vs. Quantitative Thin-Section CT: Capability for Pulmonary Functional Loss Assessment and Clinical Stage Classification in AsthmaticsSummary international conference
- 2014, Joint Annual Meeting ISMRM-ESMRMB 2014Comparative Analysis of Predictive Capability of 3D Non-Contrast-Enhanced Perfusion MRI, 3D Contrast-Enhanced Perfusion MRI, Quantitatively Assessed Thin-Section CT, and Perfusion Scan for Postoperative Lung Function in Non-Small Cell Lung Cancer PatientsSummary international conference
- 2014, 第73回日本医学放射線学会学術集会Dynamic Perfusion Area-Detector CT with AIDR 3D Method: Capability for Radiation Dose Reduction as compared with FBP MethodSummary national conference
- 2014, 第73回日本医学放射線学会学術集会3D Non-CE-Perfusion MRI: Comparison of Predictive Capability for Postoperative Lung Function with CE-Perfusion MRI and Perfusion Scan in NSCLC PatientsSummary national conference
- 2014, RSNA 2014Amide Proton Transfer (APT) Imaging for Characterization of Thoracic Nodule and Mass: Preliminary Experience as a New MR-Based Molecular Imaging Method in Thoracic OncologySummary international conference
- 2014, RSNA 2014Whole-Body FDG-PET/MRI: How to Improve the Accuracy of Clinical Stage Assessment as Compared with Whole-body FDG-PET/CT with CE-Brain MRI in Patients with Non-Small Cell Lung CancerSummary international conference
- 2014, RSNA 2014Dynamic Contrast-enhanced Perfusion Area Detector CT in Non-small Cell Lung Cancer Patients: Influence of Mathematical Model to Early Prediction Capabilities for Treatment Response and Recurrence after ChemoradiotherapySummary international conference
- 2014, RSNA 2014Newly Developed DWI Using Fast SE Sequence vs. DWI using EPI Sequence vs. FDG-PET/CT: Diagnostic Capability of N-Stage in Patients with Non-small Cell Lung CancerSummary international conference
- 2014, 第42回日本磁気共鳴医学会大会, Time-SLIP hepatic artery MRA using a 3T system: comparison with contrast-enhanced CTASummary national conference
- 2014, 第42回日本磁気共鳴医学会大会, Upper abdominal computed DWI using a 3T systemSummary national conference
- 2014, 第42回日本磁気共鳴医学会総会, Initial study of differentiation between benign and malignant thoracic nodules and masses using amide proton transfer (APT) imagingSummary national conference
- 2014, 15th Asian Oceanian Congress of RadiologyAdaptive iterative dose reduction using three-dimensional processing vs. Filter Back Projection: Utility for Quantitative Bronchial Assessment on Low-Dose Thin-Section MDCT in Patients with/without Chronic Obstructive Pulmonary DiseaseSummary international conference
- 2014, RSNA 2014Computed Diffusion-Weighted Imaging with High b-Value: How to Apply for Improving Pulmonary Nodule/Mass Assessment Capability with Acquired Diffusion-Weighted ImagingSummary international conference
- 2014, 第42回日本磁気共鳴医学会大会, Usefulness of computed high b-value DWI for identification and benign/malignant differentiation of pulmonary nodules and massesSummary national conference
- 2014, RSNA 2014Comparisons of Lung Nodule Detection Capability on Ultra-Low- and Low-Dose CTs among Newly Developed Full Iterative Reconstruction, Clinically Available Adaptive Iterative Dose Reduction 3D and Filter Back Projection Techniques in Chest Phantom StudySummary international conference
- 2014, ECR 2014Abdominal CT with Single-Energy Metal Artifact Reduction (SEMAR): Initial Experiences.Summary international conference
- 2014, ECR 2014T-SLIP MR Hepatic Arteriography at 3TSummary international conference
- 2014, ECR 2014Computed Diffusion-Weighted Image in the Abdomen.Summary international conference
- 2014, ECR 2014Abdominal CT Perfusion: Effects of Breath Control Technique.Summary international conference
- 2014, ECR 2014Optimization of Scan Interval in Abdominal CT Perfusion.Summary international conference
- 2014, ECR 2014Comparisons of capability for TNM and clinical stage assessment in non-small cell lung cancer patients among whole-body MRI, whole-body FDG-PET/CT and co-registered whole-body PET/MRISummary international conference
- 2014, ECR 2014Dynamic First-Pass Pulmonary Perfusion Area-Detector CT for Lung Nodule Assessment: Comparison of Dose Reduction Capability between Adaptive Iterative Dose Reduction using 3D Processing and Filter Back ProjectionSummary international conference
- 2014, ECR 20143D non-contrast-enhanced pulmonary perfusion MRI in non-small cell lung cancer patients: comparison of capability for postoperative lung function prediction with dynamic first-pass perfusion MRI, thin-section CT and Q scanSummary international conference
- 2014, 第6回呼吸機能イメージング研究会, Prediction of postoperative lung function in lung cancer patients using 3D non-contrast-enhanced perfusion MRISummary international conference
- 2014, 第6回呼吸機能イメージング研究会, Three-dimensional respiratory motion analysis using inspiratory and expiratory CT in COPDSummary international conference
- 2014, 第6回呼吸機能イメージング研究会, Appropriate evaluation of pulmonary nodules on chest diffusion-weighted MRI with multiple b-valuesSummary international conference
- (一社)日本核医学会, Nov. 2013, 核医学, 50(4) (4), 308 - 309, Japanese, Detection of synchronous multiple primary cancers by preoperative FDG-PET/CT in lung cancer patients
- (一社)日本核医学会, Sep. 2013, 核医学, 50(3) (3), S193 - S193, Japanese, TNM classification and staging of non-small cell lung cancer with whole-body MRI, PET/CT, and MR/PET
- (公社)日本医学放射線学会, Sep. 2013, 日本医学放射線学会秋季臨床大会抄録集, 49回, S596 - S596, Japanese, A case of thymoma presenting with massive hematoma and hemothorax
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S302 - S302, Japanese, Preoperative pulmonary vascular assessment with non-contrast-enhanced 3T MR angiography in non-small cell lung cancer patients
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S277 - S277, Japanese, Dynamic perfusion ADCT vs. dynamic perfusion MRI vs. PET/CT: lung nodule diagnosis and analysis methods
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S304 - S304, Japanese, Role of whole-body Quick 3D in diagnosing postoperative recurrence and metastasis of lung cancer: comparison with conventional methods and PET/CT
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S275 - S276, Japanese, Usefulness of AIDR 3D for lung nodule identification on low-dose CT
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S276 - S276, Japanese, Usefulness of low-dose CT for quantitative bronchial assessment in patients with pulmonary emphysema
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S392 - S393, Japanese, Abdominal CT perfusion: comparison of breath-hold and free-breathing acquisition
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S346 - S346, Japanese, Effects of dose and reconstruction kernel on the detection performance of a CAD system for lung nodules on CT: a chest phantom study
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S252 - S252, English, Non-contrast MR hepatic angiography using 3T MRI and Time-SLIP: an initial study
- (公社)日本医学放射線学会, Feb. 2013, 日本医学放射線学会学術集会抄録集, 72回, S301 - S301, English, Emphysema quantification using size distribution of 3D low attenuation clusters on CT: comparison with pulmonary function test
- 2013, RSNA 2013Solitary Pulmonary Nodule: Which Parameters Would Be Better to Assess for Quantitative Diagnosis on Diffusion-Weighted MR Imaging with Multiple b-Values?Summary international conference
- 2013, RSNA 20133D Lung Motion and Destruction Assessments from Inspiratory and Expiratory Thin-Section MDCT: Utility for Pulmonary Functional Loss and Clinical Stage Evaluation in SmokersSummary international conference
- 2013, ISMRM 2013Multi-Phase Transmission RF Systems: Utility for improvement of B1 homogeneity and Image Quality on 3T MR System as compared with Single- and Multi-Transmit RF SystemsSummary international conference
- 2013, RSNA 2013Non-contrast MR Hepatic Arteriography Using T-SLIP at 3TSummary international conference
- 2013, ISMRM 2013Non-contrast MR Hepatic Arteriography using 3T-MRI and Time-SLIP: Initial Experiences.Summary international conference
- 2013, ISMRM 2013Influence of Slice-Selective Tag Thickness for Non-Contrast-Enhanced Pulmonary MR Venography based on ECG-gated 3Dtime-spatial labeling inversion pulse (Time-SLIP) techniqueSummary international conference
- 2013, RSNA 2013Abdominal CT Perfusion: Breathhold or Free Breathing?Summary international conference
- 2013, RSNA 2013Optimization of Acquisition Interval in Abdominal CT Perfusion Measurement.Summary international conference
- 2013, RSNA 2013Optimization of Contrast Medium Administration in CT Perfusion in the Abdomen.Summary international conference
- 2013, ISMRM 2013Comparison of the Utility of Contrast-Enhanced Whole-Body MRI with and without Quick 3D and Double RF Fat Suppression Techniques, PET/CT and Conventional Examination for Assessment of Recurrence in Postoperative NSCLC PatientsSummary international conference
- 2013, ISMRM 2013Comparison of Assessment of Preoperative Pulmonary Vasculature in NSCLC Patients by Non-Contrast-Enhanced and 4DContrast-Enhanced MR Angiography at 3T and by Contrast-Enhanced MDCT Using a 64-Detector Row SystemSummary international conference
- 2013, RSNA 2013Lung and Nodule Perfusion Assessments on Dynamic First-pass Perfusion Area-detector CT: Capability of Adaptive Iterative Dose Reduction Using 3D Processing (AIDR 3D) for Radiation Dose Reduction as Compared with Filter Back Projection (FBP)Summary international conference
- 2013, RSNA 2013Whole-body MRI vs. Co-registered Whole-body FDG-PET/MRI vs. Integrated Whole-body FDG-PET/CT: Capability for TNM and Stage Assessment in Non-small Cell Lung Cancer PatientsSummary international conference
- 2013, RSNA 20133D Non-contrast-Enhanced Perfusion MRI vs. 3D Contrast-enhanced Perfusion MRI vs. Perfusion Scan: Capability for Postoperative Lung Function Prediction in Non-small Cell Lung Cancer PatientsSummary international conference
- 2013, RSNA 2013Dynamic Oxygen-enhanced MRI: Capability for Pulmonary Functional Loss Assessment and Clinical Stage Classification in Asthmatics as Compared with Quantitative Thin-section CTSummary international conference
- 2013, ISMRM 2013Oxygen-Enhanced MRI vs. Thin-Section CT: Capability for Pulmonary Functional and Disease Severity Assessments inPatients with Connective Tissue DiseasesSummary international conference
- 2013, ISMRM 2013Pulmonary 3T MR Imaging with Ultra-Short TEs: Influence of Ultra-Short Echo Time on Pulmonary Functional and ClinicalStage Assessments of SmokersSummary international conference
- 2013, 第41回日本磁気共鳴医学会大会, Time-SLIP hepatic artery MRA using a 3T systemSummary national conference
- 2013, 第41回日本磁気共鳴医学会大会, Initial study of upper abdominal computed DWI using a 3T systemSummary national conference
- 2013, 第41回日本磁気共鳴医学会大会, 3D non-contrast-enhanced perfusion MRI: prediction of postoperative lung function in lung cancer patientsSummary national conference
- 2013, 第41回日本磁気共鳴医学会大会, Whole-body MRI vs. PET/CT vs. whole-body MR/PET: TNM classification and staging of non-small cell lung cancerSummary national conference
- 2013, 第41回日本磁気共鳴医学会大会, Pulmonary vein assessment at 3T with non-contrast-enhanced MRA using Time-SLIP and contrast-enhanced 4D-MRASummary national conference
- 2013, ECR 2013Comparison of the Utility of Whole-Body MRI with and without Contrast-Enhanced Quick 3D and Double RF Fat Suppression Techniques, Conventional Whole-Body MRI, PET/CT and Conventional Examination for Assessment of Recurrence in NSCLC PatientsSummary international conference
- 2013, ECR 2013Comparison of Capabilities for Differentiating Malignant SPNs from Benign SPNs among Dynamic First-Pass Perfusion Area-Detector CT, Dynamic First-Pass MRI and FDG-PET/CTSummary international conference
- 2013, 第5回日本呼吸機能イメージング研究会学術集会, Preoperative pulmonary vascular assessment in non-small cell lung cancer with non-contrast-enhanced 3T MR angiographySummary national conference
- 2013, 第5回日本呼吸機能イメージング研究会学術集会, Assessment of pulmonary function and severity of connective tissue disease-related lung disease using MRI with ultra-short TEsSummary national conference
- 2013, 第5回日本呼吸機能イメージング研究会学術集会, Differential diagnosis of lung nodules with perfusion ADCT: comparison among analysis methodsSummary national conference
- 2013, ECR 2013The utility of adaptive iterative dose reduction using three dimensional processing (AIDR 3D) for quantitative bronchial assessment on low-dose thin-section MDCT in patients with pulmonary emphysema in comparison with filter back projectionSummary international conference
- 2013, RSNA 2013Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR 3D) vs. Filter Back Projection: Utility for Quantitative Bronchial Assessment on Low-dose Thin-Section MDCT in Patients with Pulmonary EmphysemaSummary international conference
- 2013, 第5回日本呼吸機能イメージング研究会学術集会, Quantitative assessment of pulmonary emphysema using inspiratory and expiratory CTSummary national conference
- (公社)日本医学放射線学会, Aug. 2012, 日本医学放射線学会秋季臨床大会抄録集, 48回, S575 - S575, Japanese, A case of atypical pulmonary carcinoid that developed cavitation while awaiting surgery
- (NPO)日本肺癌学会, Jun. 2012, 肺癌, 52(3) (3), 353 - 354, Japanese, A case of mixed squamous cell and glandular papilloma that was difficult to differentiate from intrapulmonary or lymph node metastasis
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S242 - S242, Japanese, Usefulness of Adaptive Iterative Dose Reduction (AIDR) 3D in low-dose chest CT
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S207 - S207, Japanese, Usefulness of diffusion-weighted imaging for differentiating anterior mediastinal tumors
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S247 - S247, Japanese, Usefulness of bronchial luminal volume assessment in patients with pulmonary emphysema
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S248 - S248, Japanese, Ultra-short-TE MRI on a 3T system: assessment of pulmonary function and severity of connective tissue disease-related lung disease
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S242 - S242, Japanese, Dual-input vs. single-input perfusion CT vs. PET/CT: differential diagnosis of lung nodules
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S381 - S381, Japanese, Quantitative assessment of pulmonary emphysema on low-dose CT using AIDR 3D
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S248 - S248, Japanese, CE-perfusion MRI vs. CE-MRA vs. CE-MDCT: assessment of response to conservative treatment in chronic pulmonary hypertension
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S250 - S250, Japanese, Comparison of airway measurement capability of 320-row MDCT in postoperative lung cancer patients: 64-row HS vs. 160-row HS vs. 320-row WVS
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S207 - S207, Japanese, Usefulness of Quick 3D with Enhanced Fat Free in whole-body 3T MRI of lung cancer patients
- (公社)日本医学放射線学会, Feb. 2012, 日本医学放射線学会学術集会抄録集, 71回, S211 - S211, Japanese, Diagnostic performance for lymph node metastasis based on lymphatic pathways in primary lung cancer
- 2012, RSNA 2012, Multi-Phase Transmit RF System on 3T MR System: Comparison of B1 Homogeneity and Image Quality for Chest MR Imaging with Single and Multi Transmit RF Systems, Summary (international conference)
- 2012, RSNA 2012, Emphysema Quantification Using Size Distribution of 3D Low Attenuation Clusters on CT: Comparison with Pulmonary Function Test, Summary (international conference)
- 2012, CARS 2012, Comparative assessment of local binary patterns and related texture features for classification of pulmonary emphysema in low-dose CT images, Summary (international conference)
- 2012, CARS 2012, Adaptive segmentation method for computer-aided volumetry of solid and subsolid lung nodules on CT, Summary (international conference)
- 2012, ISMRM 2012, T2* Measurements of 3 T MRI with Ultra-Short TE: Capability of Assessments for Pulmonary Functional Loss and Disease Severity in Patients with Connective Tissue Disease (CTD), Summary (international conference)
- 2012, RSNA 2012, Pulmonary 3T MR Imaging with Ultra-Short TEs: Impact of Ultra-Short Echo Time for Assessment of Pulmonary Function and Clinical Stage in Smokers, Summary (international conference)
- 2012, ISMRM 2012, Oxygen-enhanced MRI vs. Quantitative CT vs. Perfusion SPECT/CT: Quantitative and Qualitative Capability to Predict Therapeutic Effect for Lung Volume Reduction Surgery Candidates, Summary (international conference)
- 2012, ISMRM 2012, Contrast-Enhanced MDCT vs. Time-Resolved MR Angiography vs. Contrast-Enhanced Perfusion MRI: Assessment of Treatment Response by Patients with Chronic Thromboembolic Pulmonary Hypertension (CTEPH), Summary (international conference)
- 2012, ISMRM 2012, Whole-Body 3T MRI with Newly Developed Quick 3D and Enhanced Fat Free Techniques: Capability for Distant Metastasis and/or Recurrence Assessments in Non-Small Cell Lung Cancer as Compared with Conventional Whole-Body 3T MRI and FDG-PET/CT, Summary (international conference)
- 2012, RSNA 2012, Oxygen-enhanced MRI in Patients with Connective Tissue Diseases: Capability for Pulmonary Functional and Disease Severity Assessments as Compared with Thin-Section CT, Summary (international conference)
- 2012, RSNA 2012, Whole-Body 3T MRI with and without Newly Developed Quick 3D and Double Fat Suppression Techniques vs. FDG-PET/CT vs. Conventional Radiological Method: Capability for Postoperative Recurrence Assessments in Non-Small Cell Lung Cancer, Summary (international conference)
- 2012, RSNA 2012, Non-contrast-enhanced 3T MR Angiography vs. Time-resolved Contrast-enhanced 3T MR Angiography vs. Contrast-enhanced 64-Detector Row CT Angiography: Preoperative Assessment of Pulmonary Vasculature in Non-small Cell Lung Cancer Patients, Summary (international conference)
- 2012, RSNA 2012, Dynamic First-Pass Perfusion Area-Detector CT Analyzed by Newly Developed and Previously Applied Methods vs. Dynamic First-Pass MRI vs. FDG-PET/CT: Differential Capability of Malignant SPN from Benign SP, Summary (international conference)
- 2012, RSNA 2012, FDG-PET/CT in Patients with Lung Adenocarcinoma: Comparison of Capability for Postoperative Recurrence Prediction with Indexes Suggested by Guideline, Summary (international conference)
- 2012, 40th Annual Meeting of the Japanese Society for Magnetic Resonance in Medicine, Whole-Body Quick 3D with Enhanced Fat Free (Double Fat Suppression Pulse): Evaluation of Diagnostic Capability for Recurrence and Metastasis in Lung Cancer, Summary (national conference)
- 2012, 40th Annual Meeting of the Japanese Society for Magnetic Resonance in Medicine, Non-Contrast-Enhanced 3T MR Angiography: Preoperative Assessment of Pulmonary Vasculature in Non-Small Cell Lung Cancer Patients, Summary (national conference)
- 2012, 4th Annual Meeting of the Japanese Society of Pulmonary Functional Imaging, Utility of Bronchial Luminal Volume Assessment in Patients with Pulmonary Emphysema, Summary (national conference)
- 2012, ECR 2012, Adaptive Iterative Dose Reduction using Three-Dimensional Processing (AIDR 3D) for Reduced and Low-dose CT Examination: Comparison with Standard-Dose CT of Image Quality and Radiological Finding Assessment for Patients with Various Pulmonary Diseases, Summary (international conference)
- 2012, ECR 2012, CE-Perfusion MRI vs. CE-MDCT vs. Time-Resolved CE-MR Angiography: Assessment of Treatment Response in Patients with Chronic Thromboembolic Pulmonary Hypertension, Summary (international conference)
- 2012, 4th Annual Meeting of the Japanese Society of Pulmonary Functional Imaging, Accuracy Comparison of Automated Airway Measurement on 320-Row MDCT among 64-Row Helical Scan, 160-Row Helical Scan, and 320-Row Wide-Volume Scan Acquisitions, Meeting report
- Japanese Society of CT Screening, Jan. 2012, CT Kenshin (Journal of CT Screening), 19(1), 20, Japanese, Adaptive Iterative Dose Reduction (AIDR) 3D: Utility for Dose Reduction in Diagnostic Chest CT
- Japan Lung Cancer Society, Oct. 2011, Haigan (Japanese Journal of Lung Cancer), 51(5), 634, Japanese, Quantitative Diagnostic Capability for Lymph Node Metastasis in Non-Small Cell Lung Cancer: 3T MRI STIR vs. 1.5T MRI STIR vs. PET/CT
- Japan Lung Cancer Society, Oct. 2011, Haigan (Japanese Journal of Lung Cancer), 51(5), 633, Japanese, Quantitative Differentiation of Small Cell and Non-Small Cell Lung Cancer Using Diffusion-Weighted and STIR Imaging on Chest MRI
- Japan Lung Cancer Society, Oct. 2011, Haigan (Japanese Journal of Lung Cancer), 51(5), 448, Japanese, Diagnostic Capability of Quantitative Perfusion CT with 320-Row Area-Detector CT for Pulmonary Nodules: Direct Comparison with Dynamic MRI and PET/CT
- Japan Lung Cancer Society, Oct. 2011, Haigan (Japanese Journal of Lung Cancer), 51(5), 634, Japanese, Automated Measurement of Lesions with Ground-Glass Opacity on Chest CT
- Japan Radiological Society, Sep. 2011, Abstracts of the 47th Autumn Clinical Congress of the Japan Radiological Society, S553, Japanese, A Case of Bronchial Tumor in a Child
- Japan Radiological Society, Feb. 2011, Abstracts of the 70th Annual Meeting of the Japan Radiological Society, S234, Japanese, Comparison of Quantitative Diagnostic Capability for Lymph Node Metastasis in Non-Small Cell Lung Cancer among 3T MRI STIR, 1.5T MRI STIR, and PET/CT
- Japan Radiological Society, Feb. 2011, Abstracts of the 70th Annual Meeting of the Japan Radiological Society, S188, Japanese, Diagnostic Capability of FDG-PET/CT for Differentiating Benign from Malignant Solitary Pulmonary Nodules: Evaluation of a New SUV Assessment Method
- Japan Radiological Society, Feb. 2011, Abstracts of the 70th Annual Meeting of the Japan Radiological Society, S279, Japanese, 3.0T MRI vs. Quantitative Thin-Section CT: Assessment of Smoking-Related Pulmonary Functional Loss and COPD Severity
- Japan Radiological Society, Feb. 2011, Abstracts of the 70th Annual Meeting of the Japan Radiological Society, S342, Japanese, Concurrent- and Second-Read CAD for Lung Nodules on Low-Dose CT: Comparison with Second-Read CAD on Standard-Dose CT
- Japan Radiological Society, Feb. 2011, Abstracts of the 70th Annual Meeting of the Japan Radiological Society, S279, Japanese, Perfusion CT with 320-Row Area-Detector CT: Comparison with Dynamic MRI and PET/CT for Differentiating Benign from Malignant Pulmonary Nodules
- 2011, The 2nd Asian Congress of Thoracic Radiology, Diagnostic Capability of N-staging from NSCLC; 3.0 T-STIR Turbo SE Imaging vs. 1.5 T-STIR Turbo SE Imaging vs. FDG-PET/CT, Summary (international conference)
- 2011, The 2nd Asian Congress of Thoracic Radiology, Quantitative Bronchial Luminal Volumetric Assessment of Airflow Limitation on Thin-Section MDCT in Pulmonary Emphysema Patients
- 2011, RSNA 2011, CE-Perfusion MRI vs. CE-MDCT vs. Time-Resolved CE-MR Angiography: Comparison of Capability for Treatment Response Assessment in Patients with Chronic Thromboembolic Pulmonary Hypertension after Drug Therapy, Summary (international conference)
- 2011, RSNA 2011, Pulmonary MR Imaging with Ultra-Short TEs at a 3 T MR System: Utility for Pulmonary Functional Loss and Disease Severity Assessments in Connective Tissue Disease, Summary (international conference)
- 2011, Joint Meeting of ESTI and the Fleischner Society, Diffusion weighted MR imaging vs. FDG-PET/CT: Predictive capabilities of therapeutic effect and survival in non-small cell lung cancer patients before chemoradiotherapy
- 2011, RSNA 2011, Newly Developed Mathematical Model for First-Pass Perfusion CT using 320-Detector Row CT in Patients with Pulmonary Nodules: Comparison of Diagnostic Capability with Previously Utilized Models for First-Pass Perfusion CT and FDG-PET/CT, Summary (international conference)
- 2011, RSNA 2011, Comparison of Efficacy of STIR Turbo SE MR Imaging, Diffusion-Weighted MR Imaging and FDG-PET/CT for Quantitative and Qualitative Assessment of N-stage in Non-Small Cell Lung Cancer Patients, Summary (international conference)
- 2011, RSNA 2011, Oxygen-enhanced MRI vs. Quantitative CT vs. Perfusion SPECT/CT: Quantitative and Qualitative Capability for Therapeutic Effect Prediction in Candidates for Lung Volume Reduction Surgery, Summary (international conference)
- 2011, 39th Annual Meeting of the Japanese Society for Magnetic Resonance in Medicine, Quantitative Comparison of Diagnostic Capability for Lymph Node Metastasis in Non-Small Cell Lung Cancer among 3T MRI STIR, 1.5T MRI STIR, and PET/CT
- 2011, European Congress of Radiology 2011, Metastatic vs. Non-Metastatic Lymph Nodes in Non-small Cell Lung Cancer Patients: Compared Diagnostic Capability among 3.0 T-STIR Turbo SE Imaging, 1.5 T-STIR Turbo SE Imaging, and FDG-PET/CT, Summary (international conference)
- 2011, 3rd Annual Meeting of the Japanese Society of Pulmonary Functional Imaging / 5th International Workshop for Pulmonary Functional Imaging (joint meeting), Low-Dose CT at FDG-PET/CT Examination: Nodule Type Assessment for Improving Diagnosis in Patients with Solitary Pulmonary Nodule, Meeting report
- Japanese Society of Nuclear Medicine, Nov. 2010, Kaku Igaku (Japanese Journal of Nuclear Medicine), 47(4), 500, Japanese, A Case of Mixed Germ Cell Tumor Showing Abnormal Uptake on FDG-PET
- 2010, RSNA 2010, Concurrent- and Second-Read Computer-Aided Detection (CAD) for Lung Nodules on Low-Dose CT: Comparison with Second-Read CAD on Standard-Dose CT, Summary (international conference)
- 19 Aug. 2009, Journal of the Japanese Society for Radiation Oncology, 21(Supplement 1), 208, Japanese, Experience with Radiotherapy in Patients with Active Pulmonary Tuberculosis
- Japan Radiological Society, Feb. 2009, Abstracts of the 68th Annual Meeting of the Japan Radiological Society, S406, Japanese, Development of DICOM Image Viewing Software for Mobile Devices Using WADO
- Japan Radiological Society, Feb. 2009, Abstracts of the 68th Annual Meeting of the Japan Radiological Society, S337, Japanese, Experience with Radiotherapy in Patients with Active Pulmonary Tuberculosis
- Japanese Society of Gastroenterology, Sep. 2008, Nihon Shokakibyo Gakkai Zasshi (Journal of the Japanese Society of Gastroenterology), 105 (Congress Supplement), A846, Japanese, Review of 17 Cases of Balloon-Occluded Retrograde Transvenous Obliteration (BRTO) for Gastric and Duodenal Varices
- Japan Radiological Society, Sep. 2008, Abstracts of the 44th Autumn Clinical Congress of the Japan Radiological Society, S502, Japanese, A Case of Posterior Mediastinal Extramedullary Hematopoiesis Showing Progressive Enlargement
- Japan Radiological Society, Apr. 2008, Radiation Medicine, 26(Suppl. I), 43, Japanese, A Case of Pancreaticoduodenal Artery Aneurysm Associated with Celiac Artery Stenosis
- Editor, MDPI, Dec. 2021, ISBN: 9783036526645, Machine Learning/Deep Learning in Medical Image Processing
- Contributor, Chapter 2, Section 3, Inner Vision Inc., Apr. 2020, Learn! Master! Medical AI: From the Basics of Deep Learning to the Research Frontier (in Japanese)
- Contributor, Chapter 6, Taylor & Francis, May 2019, Lung Imaging and CADx, 1st Edition
- Seminar "Latest Applications of AI in Cancer Care," Training and Education Subcommittee Seminar, Hyogo Prefecture Cancer Care Coordination Council, Oct. 2023, Application of AI to Diagnostic Imaging of Cancer [Invited], Invited oral presentation
- Japanese Society of Digital Pathology, 3rd Educational Web Seminar, Mar. 2023, AI-Based Diagnosis in Radiology [Invited], Nominated symposium
- Liver Imaging Seminar in Kyoto, Nov. 2022, Digitalization in Diagnostic Imaging [Invited], Public discourse
- 81st Annual Meeting of the Japan Radiological Society, Apr. 2022, AI for Diagnostic Imaging of the Chest [Invited], Invited oral presentation
- 13th Annual Meeting of the Japanese Society of Pulmonary Functional Imaging, Jan. 2022, Application of Deep Learning to Pulmonary Nodules [Invited], Invited oral presentation
- Meiji University MIMS Joint Research Meeting "Current Status and Challenges of AI-Based Medical Image Analysis," Jan. 2022, Application of Machine Learning and the Homology Method to Lung Disease [Invited], Invited oral presentation
- 8th Radiology Seminar, Oct. 2021, Imaging and AI for Selecting Drug Treatment in Lung Cancer [Invited], Nominated symposium
- 28th Annual Meeting of the Japanese Society of CT Screening, Feb. 2021, Deep Learning-Based Denoising of Low-Dose CT and Generation of Lung Nodules [Invited], Invited oral presentation
- Meiji University MIMS Joint Research Meeting "Current Status and Challenges of AI-Based Medical Image Analysis," Nov. 2020, Japanese, Automated Classification of Lung Histopathology Images by the Homology Method with Multi-Resolution Processing [Invited], Invited oral presentation
- 48th Annual Meeting of the Japanese Society for Magnetic Resonance in Medicine, Sep. 2020, Japanese, MR-CT Image Translation by Deep Learning Using Unpaired Training Data and a Structure-Preserving Loss [Invited], Invited oral presentation
- Kansai Branch of the Japan Association for Medical Informatics, 1st Lecture Meeting of FY2020, May 2020, Japanese, Use of Machine Learning and Deep Learning in Diagnostic Radiology: Current Status and Future [Invited], Invited oral presentation
- "Current Status and Challenges of AI-Based Medical Image Analysis," Nov. 2019, Japanese, Kazuaki Nakane (Osaka University), Ichiro Hagiwara (Meiji University), Yasuyuki Kobayashi (St. Marianna University School of Medicine), Luis Diago (Meiji University), Naoki Hiroi (Toho University), Meiji University, Tokyo, Domestic conference, Utility of Computer-Aided Diagnosis Software in CT Lung Cancer Screening [Invited], Invited oral presentation
- JSAWI 2019, Sep. 2019, Japanese, Domestic conference, Application of Machine Learning and Deep Learning to Diagnostic Radiology [Invited], Invited oral presentation
- Kansai Branch of the Japan Association for Medical Informatics, 2nd Lecture Meeting of FY2016 (Spring Lecture Meeting), Mar. 2017, Japanese, Domestic conference, Development of a Computer-Aided Diagnosis System Using Deep Learning and a CT Image Database [Invited], Invited oral presentation
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Fund for the Promotion of Joint International Research (International Collaborative Research), Kobe University, Sep. 2023 - Mar. 2026, Application of large language models to medical natural language processing
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (C), Kobe University, Apr. 2022 - Mar. 2025, Application of deep learning using diagnostic radiology images and reports
- Kobe University, Kobe Future Medicine Initiative 1E, Kobe University, Aug. 2023 - Aug. 2024, Principal investigator, Development of a computer-aided diagnosis system for radiology reports by applying deep learning to images and reports, Others
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Early-Career Scientists, Kyoto University, Apr. 2019 - Mar. 2023, Principal investigator, Computer-aided and automatic diagnosis of chest X-ray images using deep learning. In this fiscal year, we investigated whether COVID-19 pneumonia can be diagnosed automatically by AI using chest radiographs from open databases and from multiple hospitals in Hyogo Prefecture. Approximately 25,000 images were collected from open databases and 455 from the Hyogo hospitals, and a deep learning AI model was built and evaluated; only images diagnosed as COVID-19 pneumonia, other pneumonia, or normal were collected. The AI was built by transfer learning of EfficientNet (a minimal illustrative sketch of this type of transfer learning appears after this list). For evaluation, 150 of the 455 images from the Hyogo hospitals were used; these 150 chest radiographs were read by six radiologists and by the trained AI, and performance was assessed with metrics such as three-class classification accuracy. The AI achieved an accuracy of 0.8667, whereas the six radiologists ranged from 0.5667 to 0.7733, indicating good accuracy for automated AI diagnosis. These results show that COVID-19 pneumonia can be diagnosed automatically from chest radiographs with AI, and they were published in a peer-reviewed English-language journal.
The above results demonstrated that AI can automatically diagnose COVID-19 pneumonia at hospitals in Hyogo Prefecture, but evaluation at other institutions is still required. We will therefore add chest radiographs from other hospitals, including hospitals outside Hyogo Prefecture, to evaluate the accuracy of the developed AI, and will also assess whether its output can assist physicians' image interpretation. The research period was extended for this additional evaluation. - Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B), Gifu University, 01 Apr. 2019 - 31 Mar. 2022, Basic research on the construction of a database of diverse lung nodules and the development of a self-learning diagnostic imaging support system. As basic research toward a computer-aided diagnosis system equipped with AI (so-called AI-CAD), we addressed the shortage of medical image data needed to train highly accurate deep learning models. We mainly carried out basic research on (1) the feasibility of generating three-dimensional CT images of lung nodules, (2) the effectiveness of the generated results, (3) the pursuit of realism, and (4) continuous learning (post-marketing learning). The results showed that generative adversarial networks (GANs) can produce realistic lung nodule images that are effective within a certain range (a minimal GAN training sketch appears after this list). In addition, new findings were obtained from a simulation study of three update strategies for continuous learning.
- JST, Adaptable and Seamless Technology Transfer Program through Target-driven R&D (A-STEP), Tryout type: support for R&D projects expected to contribute to social transformation during/after COVID-19, Kobe University, May 2021 - Mar. 2022, Principal investigator, Automated diagnosis of COVID-19 pneumonia on chest radiographs by deep learning with a small amount of training data
- Hyogo Prefecture, Subsidy program for realizing a post-COVID society, Aug. 2020 - Mar. 2021, Principal investigator, Automated diagnosis of COVID-19 pneumonia on chest radiographs
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Young Scientists (B), Kyoto University, 01 Apr. 2016 - 31 Mar. 2019, Development of a computer-aided diagnosis system for lung cancer CT screening using deep learning. Our research addressed two major themes: quantitative evaluation of emphysema for predicting the baseline risk of lung cancer in CT screening, and development of a computer-aided diagnosis system for differentiating lung nodules detected in CT screening.
For the former, we showed that quantitative evaluation of pulmonary emphysema using the homology method was useful for predicting the baseline risk of lung cancer. For the latter, the accuracy of computer-aided diagnosis of lung nodules was improved using deep learning. - Kyoto University Hospital, Research and Development Grant Program for Advanced Medical Devices, Kyoto University Hospital, Dec. 2018 - Mar. 2019, Principal investigator, Generation of simulated lesions on chest radiographs using generative adversarial networks
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (C), Kobe University, 01 Apr. 2014 - 31 Mar. 2017, MR Hemodynamic Evaluations of Hepatic Vasculatures. We evaluated newly developed 4D PC-MRA and MR fluid dynamics (MRFD) for assessment of hepatic vessels and liver disease. Ten volunteers and 52 patients were enrolled. Vessel visualization was satisfactory and improved after EOB administration. Hemodynamic assessment was possible in all vessels except PHAs with diameters of less than 4 mm. In comparisons among vessel types, significant differences were found only for the shear stress parameters. Blood flow and velocity were significantly highest in the aorta and lowest in the PHA; WSS and SWSSG were significantly highest in the CA and lowest in the RHV; OSI was significantly highest in the aorta and lowest in the left PV; and GON was significantly highest in the aorta and lowest in the SMV. Flow, velocity, WSS, and SWSSG of the PHA, WSS of the SPA, and flow of the left PV showed significant positive correlations with the CP score, whereas GON of the RHV showed a negative correlation. Bile duct disorders increased hepatic arterial flow. MRFD can characterize hepatic vessels, and WSS-related parameters provide additional information for liver disease assessment.
- Hyogo Science and Technology Association, Research Grant, 2015 - 2016, Principal investigator, Deep learning-based denoising of ultra-low-dose CT and its clinical application
- Japan Radiological Society, Bayer Research Grant, 2014 - 2015, Principal investigator, Clinical application of ultra-low-dose CT
- Nakatani Foundation for Advancement of Measuring Technologies in Biomedical Engineering, Technology Exchange Grant, Nakatani Foundation, 2013, Principal investigator, Emphysema Quantification on Low-Dose CT by Percentage of Low-Attenuation Volume and Size Distribution Analysis of Low-Attenuation Clusters: Effect of Adaptive Iterative Dose Reduction using 3D Processing
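The Early-Career Scientists project above describes transfer learning of EfficientNet for three-class classification of chest radiographs (COVID-19 pneumonia, other pneumonia, normal). The following is a minimal illustrative sketch of that kind of setup using PyTorch/torchvision; the dataset directory, image size, and training hyperparameters are assumptions for illustration only and are not taken from the grant report.

```python
# Minimal sketch: EfficientNet transfer learning for 3-class chest X-ray
# classification (COVID-19 pneumonia / other pneumonia / normal).
# Directory layout, image size, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # COVID-19 pneumonia, other pneumonia, normal

# Chest radiographs are grayscale; replicate to 3 channels for ImageNet weights.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed directory layout: cxr_dataset/train/<class_name>/*.png
train_set = datasets.ImageFolder("cxr_dataset/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Load ImageNet-pretrained EfficientNet-B0 and replace the classification head.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):  # the number of epochs here is an arbitrary choice
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Three-class accuracy on a held-out test set (150 images in the grant report) can then be obtained by comparing `model(images).argmax(dim=1)` with the reference labels.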
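The Scientific Research (B) project above reports that GANs could synthesize realistic lung nodule images. Below is a minimal, generic DCGAN-style training sketch (2D, 64x64 patches) illustrating the adversarial setup; the study itself targeted three-dimensional CT nodules, and all network sizes and hyperparameters here are illustrative assumptions, not the study's implementation.

```python
# Minimal 2D DCGAN-style sketch for nodule patch synthesis.
# The actual study generated 3D CT nodules; sizes here are illustrative only.
import torch
import torch.nn as nn

LATENT_DIM = 100  # length of the random noise vector fed to the generator

class Generator(nn.Module):
    """Maps a latent vector (N, 100, 1, 1) to a 64x64 single-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(True),            # 32x32
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                                     # 64x64
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Maps a 64x64 image to a single real/fake logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 32x32
            nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 16x16
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 8x8
            nn.Conv2d(64, 1, 8, 1, 0),                            # 1x1 logit
        )
    def forward(self, x):
        return self.net(x).view(-1)

def train_step(G, D, opt_g, opt_d, real, bce=nn.BCEWithLogitsLoss()):
    """One adversarial update: D learns real vs. fake, then G learns to fool D."""
    device = real.device
    z = torch.randn(real.size(0), LATENT_DIM, 1, 1, device=device)
    fake = G(z)

    # Discriminator update (real -> 1, fake -> 0)
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(real.size(0), device=device)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), device=device))
    loss_d.backward()
    opt_d.step()

    # Generator update (wants D(fake) -> 1)
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(real.size(0), device=device))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Looping `train_step` over batches of real nodule patches, with separate Adam optimizers for G and D, gives the basic adversarial training described (in generic terms) in the grant summary.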
- Information processing apparatus, information processing method, and program, Japanese Patent Application No. 2024-166142, 25 Sep. 2024, Canon Inc., Publication No. 2024-175130, 17 Dec. 2024, Patent right
- Information processing apparatus, information processing method, and program, Japanese Patent Application No. 2018-201985, 26 Oct. 2018, Canon Inc., Publication No. 2023-089277, 27 Jun. 2023, Patent No. 7604552, 13 Dec. 2024, Patent right
- Information processing apparatus, information processing method, and program, Japanese Patent Application No. 2019-121957, 28 Jun. 2019, Canon Inc., Publication No. 2023-168534, 24 Nov. 2023, Patent No. 7562799, 27 Sep. 2024, Patent right
- Information processing apparatus, information processing method, and program, Japanese Patent Application No. 2023-172591, 04 Oct. 2023, Canon Inc., Publication No. 2023-168534, 24 Nov. 2023, Patent right
- Information processing apparatus, information processing method, and program, Japanese Patent Application No. 2023-070598, 24 Apr. 2023, Canon Inc., Publication No. 2023-089277, 27 Jun. 2023, Patent right
- Image analysis method, image analysis apparatus, image analysis system, image analysis program, and recording medium, JP2018040977, 05 Nov. 2018, Osaka University, Patent No. 7264486, 17 Apr. 2023, Patent right
- Image processing apparatus, image processing method, and program, Japanese Patent Application No. 2018-202866, 29 Oct. 2018, Canon Inc., Publication No. 2020-068870, 07 May 2020, Patent No. 7229721, 17 Feb. 2023, Patent right
- Information processing apparatus, information processing method, and program, Japanese Patent Application No. 2019-121957, 28 Jun. 2019, Canon Inc., Publication No. 2021-007510, 28 Jan. 2021, Patent right
- Image processing apparatus, image processing method, and program, Japanese Patent Application No. 2018-202866, 29 Oct. 2018, Canon Inc., Publication No. 2020-068870, 07 May 2020, Patent right
- Information processing apparatus, information processing method, and program, Japanese Patent Application No. 2018-201985, 26 Oct. 2018, Canon Inc., Publication No. 2020-067957, 30 Apr. 2020, Patent right
- Image analysis method, image analysis apparatus, image analysis system, image analysis program, and recording medium, JP2018040977, 05 Nov. 2018, Osaka University, WO2019-102829, 31 May 2019, Patent right
- Medical image processing apparatus, Japanese Patent Application No. 2014-128462, 23 Jun. 2014, Toshiba Medical Systems Corporation, Kobe University, Publication No. 2016-007270, 18 Jan. 2016, Patent No. 6510189, 12 Apr. 2019, Patent right
- Medical image processing apparatus, Japanese Patent Application No. 2014-128462, 23 Jun. 2014, Toshiba Medical Systems Corporation, Kobe University, Publication No. 2016-007270, 18 Jan. 2016, Patent right
- 24 Oct. 2023, Ranked 60th, Mizuho Nishio, 5,421 citations. As of 2 Oct. 2023, researchers were listed down to the 20,000th place by total citation count (corresponding to 1,130 total paper citations). Of these 20,000, 238 appeared to be Japanese (Table 1), a proportion of 1.19%. As the table shows, these 238 researchers are affiliated not only with domestic institutions but also with academia and companies worldwide. Internet
- Aunt Minnie, Oct. 2023, https://www.auntminnie.com/clinical-news/digital-x-ray/article/15636715/covid19-ai-model-boosts-disease-detection-by-radiologists, COVID-19 AI model boosts disease detection by radiologists, Internet