
DATA PARALLELISM

tutorials.pytorch.kr/beginner/blitz/data_parallel_tutorial.html

 

import torch
import torch.nn as nn

# Fall back to CPU when no GPU is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# `model` is assumed to be an nn.Module defined earlier
if torch.cuda.device_count() > 1:
	print("Let's use", torch.cuda.device_count(), "GPUs!")
	# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
	model = nn.DataParallel(model)

model.to(device)
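A minimal self-contained sketch of the snippet above: `DataParallel` splits the input along dim 0 across the visible GPUs, runs each replica's forward on its chunk, and gathers the results. `ToyModel` and the sizes (batch of 30, 5 input features) are illustrative assumptions, not part of the tutorial; with fewer than 2 GPUs the wrapper is skipped and the model just runs on a single device.

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Tiny linear model used only to show how the batch is split."""
    def __init__(self, in_features=5, out_features=2):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)

    def forward(self, x):
        # Under DataParallel, each replica sees only its slice of the batch
        print("  inside forward: input size", tuple(x.size()))
        return self.fc(x)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = ToyModel()
if torch.cuda.device_count() > 1:
    # e.g. on 3 GPUs a batch of 30 becomes three chunks of 10
    model = nn.DataParallel(model)
model.to(device)

inputs = torch.randn(30, 5).to(device)
outputs = model(inputs)
print("outside: output size", tuple(outputs.size()))
```

Regardless of how many GPUs are present, the gathered output keeps the full batch dimension, so the caller sees a `(30, 2)` tensor either way.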

 
