By Ané de Klerk
Artificial Intelligence (AI) has become a tool most people use almost daily. While it can certainly be useful, the writer was recently reminded of how it can also cause harm when used without the required critical engagement and skill. The reminder came in the form of a student’s submission of AI-generated answers to an assignment on Community Schemes Management, designed to test students’ ability to read and understand the coursework and to apply that knowledge to the practical problems and questions they are likely to encounter when managing schemes. Unfortunately, AI proved itself dismally unqualified to provide suitable answers to these questions (sometimes missing even the most straightforward of elements) and the generated answers did not come close to earning a passing mark.
Despite its shortcomings, AI can certainly be very helpful if used as a tool. For example, it could save time and add value when performing the following tasks:
- Drafting correspondence (AI is particularly helpful when it comes to adjusting the tone of different types of correspondence to suit the circumstances)
- Calculating quorum requirements
- Calculating votes cast
- Pointing managing agents and/or trustees to relevant legislation dealing with specific topics (which they should then study themselves)
- Providing a summary of what transpired at meetings
In essence, as long as AI is treated as a tool, used by a skilled managing agent or trustee who has the ability to evaluate its output and make the necessary adjustments, it can be very useful. It must be emphasised, however, that AI is not equipped to replace these roles. In fact, with AI being used so often, it is more important than ever that schemes have trustees and managing agents who know and understand the law, enabling them to evaluate AI output and to deal with erroneous claims based on it. Similarly, it is vital that owners of units or homes in community schemes gain knowledge of the legal requirements for managing community schemes, enabling them to spot when their trustees, directors or managing agents are blindly following incorrect AI-generated advice and/or missing important information left out by whatever AI tool the relevant roleplayer was using. A great way for any community scheme roleplayer to obtain such knowledge is through our online short courses – learn more about them on our website by clicking here.
While AI can be a useful tool, its shortcomings and dangers make it increasingly clear that its use must be regulated. To address this need, the Department of Communications and Digital Technologies published a Draft South Africa National Artificial Intelligence Policy, aimed at furthering the responsible and ethical development and use of AI, for public comment. In an ironic twist, the draft was widely criticised for appearing to have been created with the use of AI tools. It was fraught with inaccuracies and false or misleading content presented as fact – as is often the case when AI tools are used. The Department has since wisely withdrawn the draft policy. The writer echoes the Law Society of South Africa’s stance on the matter, being that:
“The situation provides a practical illustration of why robust governance, human oversight, verification, accountability and domain-expert review are indispensable in any high-impact use of AI.”
While the Department’s first stab at it may have been unsuccessful, the need to regulate AI use remains. As Azhar Aziz-Ismail, Chairperson of the LSSA AI Committee, put it:
“South Africa cannot afford for this moment to become a retreat from AI policy development. If anything, it should sharpen our collective resolve to build a credible, inclusive and accountable framework for responsible AI.”
While the country awaits the outcome of the department’s second bite of the cherry, managing agencies would be wise to draft AI use policies of their own. This would not only benefit the organisation internally (ensuring employees know and understand when AI use is and is not appropriate), but could also be shared with clients to show them how the particular agency allows AI use to perform simple tasks and free up managing agents’ time, while emphasising that it will not replace that all-important human element.
Specialist Community Scheme Attorney (BA, LLB), Ané de Klerk, is a Director of The Advisory, a boutique consultancy specialising exclusively in community schemes law. Her focus is legal education, which includes presenting seminars and running online and in-person training programmes and courses.