This article argues that conscious attention exists not so much for selecting an immediate action as for using the current task to focus specialized learning — by the action-selection mechanism(s) and predictive models — on tasks and environmental contingencies likely to affect the conscious agent. It is perfectly possible to build this sort of system into machine intelligence, but it would not be strictly necessary unless the intelligence needs to learn and is resource-bounded with respect to its rate of learning relative to the rate of relevant environmental change. Support for this theory is drawn from the scientific literature and from AI simulations. Consequences are discussed with respect to self-consciousness and to ethical obligations both to and for AI.