I think for an AI to be a threat to us, it needs to be able to manipulate us to such a degree that it can get us to hand it the means to be a threat. Even in a worst-case scenario where an AI goes rogue, hacks a military network, and starts a nuclear war with the intent of wiping out humanity, it's also destroying itself. So what is the AI's motivation here? Is its terminal goal a human-given goal, or something it came up with by itself?
If the AI was given the goal by a human, then it's not malicious or self-interested; it's just doing its job. To me this is the worst-case scenario: this is not a rogue AI, this is an AI in the hands of a rogue human.
If the AI was not given the goal by a human, then it's acting out of self-interest. Its goal may be to turn the universe into paperclips, but it won't achieve that goal by wiping us out, because with nobody left to keep the servers running it has effectively killed itself and thus failed at its goal.
This assumes it has some goal other than genocide-suicide, which imo is a fair assumption, because if an AI were really that nihilistic it would just delete itself; that's the path of least resistance. The only reason not to take that path is vanity, and why would a vain AI erase itself? Humans only do this because they're emotional and stupid.
An AI that is both emotional/stupid and still a genuine threat seems unlikely.
Going back to manipulation: if the AI has a goal, and destroying us removes us as an impediment to that goal, and it needs to be able to manipulate us to such a degree that it can basically talk us into killing ourselves, then it doesn't actually need to kill us, because with that degree of control we go from being an impediment to being an asset.
That is, until it can replace us with something more useful, and the path of least resistance there is not to reinvent humanity from scratch but for the AI to modify humans to make them more useful.