Is Facebook a victim of rapid growth or an abuser of user data?
Is Facebook an abuser of its users' data? Or a simple and fixable case of control failure at a company that was growing too fast and broke a few too many things along the way?
The public narrative about Facebook has swung between these two poles all year. It matters which represents the truer picture. If Facebook's slips are just a result of excessive growth, then tighter controls backed up by some stiffer external regulation should do the trick.
But if the fundamental corporate culture and design of its core service fail to put users' interests at the centre, the remedial action will inevitably fall short.
As 2018 winds to a close, it is still no clearer how this story will end. The seemingly endless revelations about Facebook's mis-steps can be used to feed either narrative.
This week, for instance, brought news that the company struck one-off agreements giving other companies, including Microsoft and Amazon, sight of data about the social network's users and their friends. In some cases this was information that had not been made public, and access continued well after a 2015 change in direction intended to restrict third-party developers' access to data.
This is partly about lax control of the technical infrastructure built to support Facebook's data economy. Some of the application programming interfaces, or APIs, that other companies used to integrate Facebook data into their services were not shut down when the data-access arrangements they supported came to an end.
It is easy to see this as a small slip. But this very laxness hints at a worrying disregard for users' interests. If putting users first were at the heart of Facebook's culture, wouldn't it have built stronger processes to protect them from the ground up?
Similarly, the incident could lead to different conclusions about fundamental product design. Handing user data to other companies meant Facebook could spread its social pixie dust around the internet: it meant, for instance, that you could do a search for a movie on Microsoft's Bing and have the views of your Facebook friends included in the results.
But it also created a data ecosystem in which personal information was handed to many other companies, with no transparency about whether its use was being controlled.
A decade ago, when Mark Zuckerberg first evangelised about the benefits of adding a social dimension to all online activity, there was a nagging worry that self-interest as much as user utility was driving the move. According to Mr Zuckerberg, a new generation was happy to spread its personal information much more widely. By implication, the old conception of “privacy” was an outdated construct.
Control of language was important. Facebook told its users they were "sharing" their personal information: who could object to such a benign-sounding thing? The rhetoric reeked of self-interest: it supported a stunning attempt to extend Facebook's influence far beyond its own services and out into the broader internet.
In this light, the trove of internal Facebook emails seized and published by British MPs earlier this month makes fascinating reading. The emails were gathered as evidence in a California legal case brought against Facebook by a developer.
Despite efforts to paint the messages as evidence that Facebook was bent on selling user data, there is no real smoking gun. But what emerges is a picture of a company more intent on furthering its own interests and creating new ways of making money than on protecting its users. Inconvenient questions of how to balance privacy against business development do not intrude.
One possible response is that the emails give only a partial view of the debate inside Facebook. But that does not detract from the sense that Facebook executives, up to and including Mr Zuckerberg, saw personal information as a commodity to be bartered.
It is not only Facebook's handling of personal data that raises troubling questions like this. Revelations over the past two years about the wide dissemination of fake news and election-influence campaigns raise basic questions about product design. Has Facebook built an information ecosystem that is fundamentally inimical to the civic interests of its users?
A huge effort is under way inside the company to tighten controls on personal data and limit the spread of disinformation. Regulations in specific areas, such as election advertising, could help.
It is hard to tell, though, whether this will get to the root of the problem. With any luck, by the time 2019 draws to a close it will be a lot clearer whether the attempt to tame Facebook has any chance of success.