It sounds as if you’ve trained a discrete classifier, but you want continuous output. Switch your algorithm to regression rather than classification.
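To illustrate the structural difference (this is a minimal sketch with invented numbers, not your model): a classifier scores a set of temperature bins and trains with cross-entropy, while a regressor emits a single continuous value in degrees and trains with squared error.

```python
import math

def cross_entropy(probs, true_class):
    """Classification loss: penalize low probability on the true bin."""
    return -math.log(probs[true_class])

def squared_error(prediction, target):
    """Regression loss: penalize distance from the true temperature."""
    return (prediction - target) ** 2

# Classification: the output is a distribution over 5 temperature bins.
class_probs = [0.01, 0.05, 0.56, 0.24, 0.14]
print(cross_entropy(class_probs, 2))  # loss when the true bin is "3" (index 2)

# Regression: the output is one continuous number in degrees.
print(squared_error(3.3, 3.5))
```

The training targets differ accordingly: bin indices for classification, raw temperatures for regression.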
Another possibility is to harness your last-layer output to interpolate: use the weights given to the top choice and its strongest adjacent choice. For instance, if your classification gives

    class  probability
    1      .01
    2      .05
    3      .56
    4      .24
    5      .14

… you would interpolate with 56 parts `3` and 24 parts `4`, to get (3 × .56 + 4 × .24) / (.56 + .24) = 3.3 degrees as your output.
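That weighted average can be computed directly from the two probabilities (the numbers come from the example distribution above):

```python
# Weighted average of the top class and its stronger neighbor,
# using their output probabilities as the weights.
top_class, top_prob = 3, 0.56
neighbor_class, neighbor_prob = 4, 0.24

interpolated = (top_class * top_prob + neighbor_class * neighbor_prob) \
               / (top_prob + neighbor_prob)
print(interpolated)  # approximately 3.3
```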
Does that help?
UPDATE
(1) how can I switch to regression from classification?
This is far too broad for Stack Overflow; you need to do your own research first. The difference between the two is not trivial. Once you have, post a new, specific question that includes your current code and your work toward making the switch.
(2) while I am predicting values from the output, how do I know that I am looking for 3.3 degrees … ?
While you’re predicting, you don’t know; that would have been an issue for training. The example I gave is just an illustration of a possible result. I invented an example, since you gave no details on your data.
(3) whose parts should I select?
I recommended that you take the top guess (the one that would have been your integer-valued classification) and the more probable of the adjacent values. In my example, `3` is the top guess. You look at `2` and `4`, see that `4` is more likely than `2`, and use `4` as the interpolation’s other endpoint.