A disturbing trend in the real-world deployment of AI systems has been the tendency of some users to inflict or act out 'abuse' on those systems. Whether in the context of virtual assistants or social robots, some users appear to engage with the system intending to damage, insult, disrespect, or otherwise demean it. However, pinning down what, if anything, is wrong or troublesome about such behaviour is not easy. It is not obvious that the norms governing good human-human interactions should also apply to acceptable human-AI interactions. Consequently, it is unclear to what extent, if any, an engineer designing such an AI system should attempt to minimise the chances of such interactions occurring. In this talk, I briefly introduce some ideas from moral philosophy to help make sense of this issue and to identify what, if any, moral duties we owe to AI systems.