Fast.ai and tensors on different devices

This is a small post on a (relatively) low-level issue that came up while I was poking around with fast.ai. I was trying to peek at the parameters of a fast.ai model trained on a GPU (specifically a collaborative filtering model). But

learn.model.i_bias(tensor([1]))

responded with

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
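
For context, this isn't specific to fast.ai: i_bias is (as far as I can tell) just a PyTorch embedding layer, so the same error can be reproduced with plain PyTorch. A minimal sketch, assuming a CUDA device is available:

import torch
import torch.nn as nn

emb = nn.Embedding(10, 1).to('cuda:0')  # embedding weights live on the GPU
idx = torch.tensor([1])                 # index tensor is created on the CPU by default
emb(idx)                                # raises the same RuntimeError

PyTorch refuses to mix devices within a single operation: the embedding lookup (an index_select under the hood, hence the mention in the error message) needs the weight matrix and the indices on the same device.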

I want to dig deeper into why this is an issue later, but how do we get around it for the time being? The error message is clear about the problem, so first of all let's see what device the model is on, using this recommendation on StackOverflow:

next(learn.model.parameters()).device

--Output--
device(type='cuda', index=0)

This shows that the model is on my GPU. My natural guess is that tensor([1]) by default creates the tensor in system memory (i.e. the memory attached to my CPU). If I make explicit the device on which I want to create the tensor

learn.model.i_bias(tensor([1], device='cuda:0'))

--Output--
tensor([[-0.1299]], device='cuda:0', grad_fn=<EmbeddingBackward0>)

there’s no complaints.