
# Embedding AI-Generated Visuals in Salesforce Documentation

## Overview: AI-Generated Visuals in Technical Documentation
AI-generated visuals are emerging as a powerful aid for technical writers, enabling the rapid creation of architecture diagrams, flowcharts, and other illustrations from natural language descriptions. In the context of Salesforce architecture and documentation, these visuals can dramatically improve how complex systems are communicated. By converting descriptions of CRM systems, data flows, or integration processes into diagrams, large language models (LLMs) like GPT-4 can help document designers explain Salesforce solutions more clearly (Source: medium.com). This accelerates the documentation process and enhances reader understanding, as visual diagrams often convey structure and sequence better than text alone.
Generative AI image models (such as OpenAI's DALL·E 3) also offer the ability to produce custom illustrations or conceptual art to accompany technical content. For example, members of the Salesforce community have started to leverage AI for visuals in their publications. Salesforce architect Bob Buzzard (Keir Bowden) has experimented with using DALL·E 3 to generate illustrative images for blog posts on Salesforce topics (Source: bobbuzzard.blogspot.com). These AI-created images can add visual interest or metaphors to documentation (e.g. a DALL·E image symbolizing a "Ship Happens" continuous deployment scenario (Source: bobbuzzard.blogspot.com)), making technical guides more engaging. Overall, AI-generated visuals – whether schematic diagrams or creative graphics – support Salesforce documentation by saving time and providing clearer, up-to-date visual explanations of complex ideas.
## AI Tools for Generating Salesforce Diagrams
Modern LLMs and generative models can produce two broad types of visuals for documentation:
- **Diagram Code Generation with LLMs (GPT-4):** GPT-4 and similar LLMs can translate a plain-language description of an architecture or process into diagram markup languages like PlantUML or Mermaid. PlantUML and Mermaid are text-based syntax systems for creating diagrams (UML charts, flowcharts, sequence diagrams, etc.) that can be rendered into images. Using GPT-4 to generate these definitions allows a writer to get a first draft of a diagram without manual drawing (Source: medium.com)(Source: bool.dev). For instance, a writer might prompt: "Generate a sequence diagram of a Salesforce login flow with a user, the Salesforce server, and an authentication service. Use Mermaid syntax." The LLM can output Mermaid sequence diagram code reflecting that description. One example in a related context is asking ChatGPT to explain how DNS works in a sequence diagram; the AI produced a correct Mermaid diagram defining the user, browser, DNS resolver, etc., saving significant manual effort (Source: bool.dev). GPT-4 can similarly produce UML diagrams (class diagrams, flowcharts, deployment diagrams) from descriptions of Salesforce components and relationships – for example, outlining a Salesforce data model with Accounts, Contacts, and a custom object could yield PlantUML code for a class diagram. The key benefit is speed and accuracy in capturing the described structure. Writers may use a library of prompt templates to ensure consistent output format (e.g. always requesting the diagram in a specific syntax and style).
- **Generative Image Models (DALL·E, Stable Diffusion):** For visuals that are less about precise architecture and more about conceptual or decorative illustration, image-generation models can be employed. A technical writer could prompt DALL·E with a request for an architecture schematic or an infographic-style image. For example: "An isometric illustration of a Salesforce cloud architecture with databases and integration arrows, in a flat icon style." The model might produce a unique image to use in an overview section. These tools can also create cover images for documentation pages (as Bob Buzzard did, using an AI image to represent an "evil AI assistant" scenario (Source: bobbuzzard.blogspot.com)). However, purely generative images have limitations: they might not precisely follow Salesforce's official iconography, or they could introduce inaccuracies (e.g., a DALL·E image might depict generic cloud servers rather than Salesforce-specific components). Thus, they are usually used to supplement documentation (for visual appeal or conceptual diagrams), while structured diagrams (like UML-style charts) are better handled with text-to-diagram tools for precision.
By combining GPT-4 for diagram code and models like DALL·E for creative illustrations, Salesforce documentation teams can cover both precise technical diagrams and engaging conceptual visuals. In all cases, human oversight is essential to verify that the AI-generated visuals correctly represent the Salesforce features or architecture being documented.
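To make the first approach concrete, the Salesforce login-flow prompt mentioned above might yield Mermaid sequence-diagram code along these lines (an illustrative sketch; an actual GPT-4 response will vary):

```mermaid
sequenceDiagram
    participant User
    participant SF as Salesforce Server
    participant Auth as Authentication Service
    User->>SF: Submit login credentials
    SF->>Auth: Validate credentials
    Auth-->>SF: Authentication token
    SF-->>User: Session established
```

Pasted into a `mermaid` code block on a supporting platform, this text renders directly as a sequence diagram, with no drawing tool involved.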
## Workflow Overview: From Prompt to Published Diagram
Embedding AI-generated visuals into a Git-backed documentation site involves a series of steps that integrate LLM outputs with documentation tools. Below is a step-by-step workflow illustrating how a technical writer can go from an idea to an auto-updated diagram in a Salesforce documentation portal (assuming a docs-as-code approach using Docusaurus, PlantUML/Mermaid, Git, and CI/CD):
1. **Draft the Diagram Prompt:** The process begins with the writer formulating a prompt for the LLM describing the desired diagram. For example, suppose the writer needs an architecture diagram of a Salesforce integration. They might write a prompt like: "Draw a system architecture diagram in PlantUML. Show a Salesforce Sales Cloud org, a middleware API gateway, and an external billing system. Indicate data flows from Salesforce to the API gateway and to the billing system." The prompt can be refined with directives (e.g., "use PlantUML deployment diagram syntax with nodes and components"). Using a consistent prompt template helps ensure the AI's output is in the correct format (for instance, always starting with `@startuml` and ending with `@enduml` for PlantUML).
2. **Generate Diagram Code with the LLM:** The writer feeds the prompt to GPT-4 (via ChatGPT or an API integration). The LLM responds with diagram code in the requested syntax. For instance, GPT-4 might return:

   ```plantuml
   @startuml
   actor User
   node SalesforceOrg <<Salesforce>> {
     component "Sales Cloud" as SC
   }
   node APIGateway {
     component "Middleware API" as API
   }
   node BillingSystem {
     component "Billing DB" as DB
   }
   User --> SC : uses
   SC --> API : sends data
   API --> DB : billing info
   @enduml
   ```
   This code is a textual representation of the diagram. In practice, ChatGPT's output quality is high – it converts natural language descriptions into valid PlantUML syntax that can be rendered (Source: medium.com). If Mermaid were requested instead, a similar text block in Mermaid syntax would be produced. The technical writer reviews this output, ensuring it matches the intended architecture (and asking the LLM to adjust if something is missing or incorrect, using iterative prompts). Notably, LLMs can refine diagrams through dialogue: for example, "Add an arrow showing the API Gateway calling back to Salesforce" would prompt GPT-4 to insert the additional relationship in the code. This iterative refinement leverages the LLM's conversational memory to incrementally improve the diagram (Source: medium.com).
3. **Incorporate Diagram Code into Documentation:** Once satisfied, the writer integrates the diagram into the documentation source. There are two common methods:
   - **Embedding as Diagram Source (Mermaid in Markdown):** If using Docusaurus (or a similar static site generator) with Mermaid, the writer can paste the code directly into a Markdown/MDX file inside a triple-fenced code block annotated with `mermaid`. For example:

     ```mermaid
     graph TD;
       SC[Salesforce Sales Cloud] --> API[Middleware API Gateway];
       API --> DB[Billing System];
     ```

     Docusaurus v2/v3 supports rendering Mermaid diagrams from such Markdown code blocks when the Mermaid plugin is enabled. This means the diagram will appear on the documentation page automatically (converted either at build time or client side via the Mermaid JS library). The diagram's **textual source** lives in version control alongside the documentation content.
   - **Storing as PlantUML and Referencing an Image:** If using PlantUML, or if a raster/SVG image is preferred, the workflow might involve saving the AI-generated PlantUML text to a file (e.g., `IntegrationDiagram.puml`) in the docs repository. The writer would then include an image reference in the Markdown, pointing to the expected output image (for example, `![Integration diagram](./IntegrationDiagram.svg)`). Initially, that `IntegrationDiagram.svg` might not exist – it will be produced by an automated step later. The key is that the source `.puml` file (with the PlantUML code) is added to the repo, so it can be converted to an image by tooling.

   In both cases, the *LLM-generated diagram description is now part of the Git-backed documentation*: either directly embedded in Markdown (Mermaid text) or as a separate text file (PlantUML) referenced by the docs. This approach treats diagrams as code, aligning with the **Docs as Code** philosophy, where documentation and its assets are plain text and under version control.

4. **Commit and Push to Version Control:** The writer commits the changes to the Git repository (e.g., on GitHub or GitLab). The commit will include the new or modified documentation page and the diagram source file (if using an external `.puml`). Version control now has a record of the diagram's textual description, which is crucial for maintainability.
   Teams can perform code review on this commit: reviewers might read the PlantUML/Mermaid text to verify it correctly represents the system. Having the diagram in text form makes diffs human-readable – one can see what changed in the diagram between versions, just like code changes. This is a major advantage over binary images, where a change is opaque.

5. **Automated Diagram Rendering via CI/CD:** With the new content pushed, a Continuous Integration workflow kicks in to handle diagram generation and site publishing. Tools like **GitHub Actions** or GitLab CI are configured to detect changes to documentation and diagram files:
   - **Trigger on Diagram Changes:** A CI pipeline might be set to trigger whenever files with diagram code (e.g., `*.puml` or Markdown containing Mermaid blocks) are updated. For example, a GitHub Actions workflow can be configured under `.github/workflows/` to run on each push to the main docs branch, filtering for diagram file changes.
   - **Install Diagram Renderers:** The CI job spins up an environment (often an Ubuntu runner) and installs the necessary tools. For PlantUML, that means installing Java and Graphviz (for UML graph rendering) and downloading the PlantUML jar. If the Mermaid CLI or Kroki will be used, those are set up here as well.
   - **Generate Images:** The workflow then runs a script or action to convert diagram text into image files. In our example, the action could search the repo for any `.puml` files and run PlantUML on each to produce an SVG:

     ```bash
     find ./docs -name '*.puml' -exec java -jar plantuml.jar -tsvg {} +
     ```

     This command finds all PlantUML files and generates corresponding `.svg` images in place. The `-tsvg` flag specifies SVG output (which is preferred for clarity and scalability).
     Similarly, for Mermaid, one could use the Mermaid CLI to generate PNG/SVG, or use a service like **Kroki**. Kroki provides an API to generate diagrams from text (supporting PlantUML, Mermaid, and more) and can be used in CI or at runtime. Alternatively, if the Docusaurus Mermaid plugin is used, explicit image generation can be skipped – the diagrams will render during the site build. But pre-generating images in CI is helpful for source control and for any pipeline that needs actual image files (for PDF docs or older site setups).
   - **Embed or Commit the Outputs:** After rendering, the CI pipeline makes the diagrams available for the site. In a pure static site build, the generated SVGs could simply reside in the build output. In a docs-as-code repo, one can choose to **commit the generated SVG files back to the repository**. The GitHub Action can be given permission to commit on behalf of a bot user and push the new/updated images. This way, the repository always contains the latest version of each diagram image alongside its `.puml` source. Marco Siccardi describes this approach of auto-rendering and committing PlantUML diagrams in a version-controlled workflow: the action adds any changed `.svg` diagram and commits with a message like "docs: auto-rendered PlantUML diagrams". Committing the image isn't strictly required (some teams prefer not to version binary outputs), but it can be convenient for review and for other contexts (like viewing the image directly in the repo or embedding it in wiki pages). In any case, by the end of this step, the documentation site's content (Markdown/MDX and any images) is updated and ready to be published.

6. **Documentation Site Build and Deployment:** The next stage in CI is to build and deploy the documentation site.
   For a Docusaurus site, the `npm run build` (or `docusaurus build`) command is run. During this static build, Mermaid diagrams in the content are converted to SVG internally (if not already done in CI), and referenced PlantUML images are linked. The presence of the up-to-date `.svg` files (generated in the previous step) means the Markdown image reference (e.g., to `IntegrationDiagram.svg`) now finds a matching image file. The static site generator includes those images in the final site output. Finally, the CI job deploys the site, for example by uploading to GitHub Pages or an S3 bucket, or via Netlify/Vercel. After deployment, the live documentation site displays the new AI-generated diagrams as part of the pages. The visuals are now *automatically integrated* – the writer did not manually draw or upload any diagram image; they only provided text (the prompt and the diagram code), and the pipeline handled the rest.

7. **Continuous Updates and Versioning:** Over time, if the Salesforce architecture or process changes, the diagram can be updated by editing the PlantUML/Mermaid source or even re-generating it with an LLM (using the original prompt plus new details). Each update flows through the same pipeline, ensuring the documentation and its visuals stay current. Because everything is version-controlled, one can trace when and why a diagram changed (for example, a commit might say *"Updated integration diagram to add new billing API endpoint"*, and the diff would show the added component in the text). This practice upholds documentation quality in an agile environment: diagrams evolve with the system. The **Docs as Code** approach combined with AI generation means that maintaining diagrams is as quick as updating a few lines of text, which lowers the barrier to keeping architecture diagrams in sync with reality.
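Steps 5 and 6 can be sketched as a single GitHub Actions workflow. The file name, branch, and paths are assumptions to adapt to your repo, and the community `peaceiris/actions-gh-pages` action is one of several ways to publish to GitHub Pages:

```yaml
# .github/workflows/docs-diagrams.yml (hypothetical file name)
name: Render diagrams and deploy docs
on:
  push:
    branches: [main]      # assumed docs branch
    paths:
      - 'docs/**'         # only run when docs or diagram sources change
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install PlantUML and Graphviz
        # Ubuntu's packaged PlantUML may lag the latest release; alternatively,
        # download the PlantUML jar and invoke it with Java as shown earlier.
        run: sudo apt-get update && sudo apt-get install -y graphviz plantuml
      - name: Render .puml sources to SVG
        run: find ./docs -name '*.puml' -exec plantuml -tsvg {} +
      - name: Build the Docusaurus site
        run: npm ci && npm run build
      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./build
```

The `paths` filter keeps the workflow from running on unrelated commits; teams that commit rendered SVGs back to the repo would add a commit step before deployment.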
The above workflow demonstrates how an LLM-generated diagram can seamlessly move from a writer's prompt to a published, versioned visual on a Salesforce docs site. Each tool has a specific role, summarized in the table below:

| **Tool/Component** | **Role in Workflow** |
|---|---|
| **GPT-4 (LLM)** | Converts natural language descriptions into diagram code (PlantUML/Mermaid). Assists in quick creation of UML diagrams and flows. |
| **PlantUML / Mermaid** | Diagram definition languages. PlantUML supports UML diagrams (class, sequence, etc.) via text scripts; Mermaid focuses on flowcharts, sequences, etc., and integrates with many static site tools. |
| **Docusaurus** | Static documentation site generator. Supports embedding Mermaid diagrams directly in Markdown. Hosts the content and provides the template for a docs-as-code site (navigation, theming, etc.). |
| **Git & GitHub/GitLab** | Version control for docs and diagrams. Stores Markdown files, `.puml`/`.mmd` sources, and generated images, enabling tracking and collaboration on documentation. |
| **GitHub Actions / CI** | Automates diagram rendering and site deployment. Runs scripts to generate images from PlantUML/Mermaid code, and publishes the updated static site. Ensures consistency and saves manual work. |
| **Kroki (optional)** | Diagram-as-a-service tool that can render diagrams on the fly or during build, supporting PlantUML, Mermaid, Graphviz, etc., through a web API. Useful if one prefers not to install heavy dependencies in CI. |
| **Generative Image Model (optional)** | Tool like DALL·E used to create illustrative images to complement diagrams. These images are added to docs as static assets (after writer selection and review). |

With this toolchain, technical writers can focus on *what* the diagram should convey, while automation handles *how* it's rendered and integrated.

## Integrating Diagrams into Documentation and CI/CD

A critical aspect of this workflow is the seamless integration of diagrams into a version-controlled, continuously deployed documentation system. This section highlights best practices in that integration:

- **Diagrams as Code in Git:** Storing diagrams in textual form (PlantUML, Mermaid, etc.) alongside the documentation ensures they are treated like source code. This brings multiple benefits: version history, the ability to do code reviews, and easy diffing of changes. A teammate can review a pull request and see, for example, that a *line was added to the sequence diagram showing a new API call*. This transparency makes collaboration on docs much easier than exchanging binary diagram files. As an example, one workflow only triggers the diagram generation action when a `.puml` file changes, and even restricts it to a specific branch (like a `docs` branch). Such fine-grained control is possible because the diagrams are first-class citizens in the repository, not external files.
- **CI/CD Pipeline for Docs:** Employ a CI/CD pipeline that includes documentation deployment. In a GitHub setup, this might use GitHub Actions to run jobs on each push. In GitLab, a similar pipeline (with a `.gitlab-ci.yml`) can be used. The pipeline should:
  1. Install dependencies (diagram renderers, Docusaurus or another static site generator).
  2. Build or generate diagrams (as covered in the previous section).
  3. Build the documentation site (e.g., run Docusaurus to produce static files).
  4. Deploy the site (e.g., publish to GitHub Pages or another hosting service).

  With such automation, once the writer merges their changes, the updated documentation (with new diagrams) is live in minutes, without manual steps. This tight integration encourages frequent documentation updates, since the overhead is low.
- **Handling Mermaid vs PlantUML:** If using **Mermaid**, one advantage is that platforms like GitHub and GitLab can render Mermaid diagrams natively in README files and wikis, and Docusaurus supports it directly. This can reduce the need for pre-generating images. The trade-off is that Mermaid might not support every UML feature that PlantUML does, but it covers most needs (flowcharts, sequence diagrams, Gantt charts, class diagrams, etc.). **PlantUML**, on the other hand, has a rich feature set (including more formal UML notations and the ability to use sprites/icons). PlantUML may require the extra CI step to generate images, but as shown above, this can be fully automated. Teams should choose based on familiarity and requirements: if the Salesforce documentation requires standardized UML or certain layout control, PlantUML might be preferable; if quick integration and simplicity are key, Mermaid could suffice.
- **Using GitHub Actions Marketplace Tools:** There are pre-built Actions in the GitHub Marketplace that simplify PlantUML generation. For instance, *"Generate PlantUML"* actions exist that can take PlantUML code and produce an image artifact. These can be plugged into workflows without writing a custom script from scratch. They often support caching the PlantUML installation or using a PlantUML server for faster turnaround.
- **Documentation Site Configuration (Docusaurus specifics):** In Docusaurus, enabling Mermaid requires adding the `@docusaurus/theme-mermaid` package and setting `markdown.mermaid: true` in the config file. Once configured, any code block labeled as `mermaid` renders as a diagram. It's also possible to create interactive or dynamic diagrams using React components (Docusaurus allows MDX, so one could import a `<Mermaid>` component for dynamic generation, though this is advanced). For PlantUML, since there's no official plugin, teams either pre-generate images or use a remark plugin that calls a PlantUML server during the build. A community plugin like **remark-kroki** can be integrated into the Docusaurus build pipeline to auto-render PlantUML and other diagrams at build time. This plugin routes diagram definitions (embedded in Markdown) to a Kroki server and embeds the returned image. In an enterprise setting, one might host a private Kroki server (or PlantUML server) internally to handle these render requests securely.
- **Integration with Salesforce Documentation Repos:** If the Salesforce documentation is maintained in a Git repo (for example, a team might maintain internal implementation docs or a knowledge base in Markdown), the same principles apply. The docs repo can include the CI steps to manage diagrams. In some cases, Salesforce project teams use tools like **MkDocs** or **AsciiDoc** for docs – those have similar plugins for Mermaid/PlantUML. The workflow described is not limited to Docusaurus; it generalizes to any static site or wiki that allows custom CI. The key integration point is that version control triggers automation, ensuring *consistency* between what's described in text and what appears visually in the documentation.
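The Docusaurus changes described above amount to a couple of configuration keys. A minimal sketch of the relevant portion of `docusaurus.config.js` (all other site options omitted):

```javascript
// docusaurus.config.js – only the Mermaid-related keys shown
module.exports = {
  markdown: {
    mermaid: true, // treat ```mermaid code blocks as diagrams
  },
  themes: ['@docusaurus/theme-mermaid'], // renders the diagrams in the browser
};
```

After installing the theme package (e.g., `npm install @docusaurus/theme-mermaid`) and rebuilding, existing `mermaid` code blocks in Markdown pages render as diagrams with no further content changes.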
## Maintaining Versioned Documentation with AI-Generated Visuals

When incorporating AI-generated diagrams into an enterprise documentation set, maintaining accuracy and version alignment is paramount. Below are best practices for keeping visuals in sync with documentation and system changes:

- **Store the Source and the Prompt:** Always keep the source diagram text under version control. If the diagram was created via GPT-4, consider also storing the prompt (at least in commit messages, or as a comment in the diagram file). For example, in a PlantUML file you might include `' Prompt: "Generate a diagram of ..."` as a comment at the top. This way, months later, someone can see the original intent or even reuse the prompt to regenerate the diagram with updated parameters. This practice provides transparency into AI contributions and eases future edits (whether manual or AI-assisted).
- **Human Review and Validation:** AI-generated content should be reviewed by a subject-matter expert, or at least by the technical writer, for correctness. LLMs can occasionally produce *plausible but incorrect* details (a phenomenon known as hallucination). In a Salesforce architecture diagram, for instance, GPT-4 might misname a component or assume a connection that doesn't exist. Catching these issues is crucial before the diagram is published as authoritative documentation. It's advisable to **treat the AI's first output as a draft** – use it as a starting point, then fix or refine any inaccuracies. The iterative prompting capability of GPT-4 makes this easier: the writer can instruct the model to correct mistakes or add missing elements in subsequent prompts. Still, ultimate approval should come from a human who understands the Salesforce context.
- **Consistency in Updates:** As the system evolves, decide on a process for updating diagrams. If minor changes are needed, a technical writer can manually edit the PlantUML or Mermaid code directly – this is often faster than re-prompting the AI for small tweaks. For larger structural changes, a fresh GPT prompt might be useful. In either case, ensure the textual description in the documentation (the surrounding text) and the diagram stay consistent. Because everything is versioned, changes to one should ideally be accompanied by changes to the other in the same commit (e.g., if a new microservice is added to the architecture, add it to both the architecture description paragraph and the diagram file together). **Branching and pull requests** can help coordinate such updates, with code reviews verifying that the visuals and text align in each feature or fix.
- **Version Tags and Diagram Versions:** In a software project, one might tag releases in Git; it's important to be able to access the documentation (and diagrams) as they were at any release. By storing diagrams as code, you automatically get this benefit – checking out an older tag or branch of the docs repo gives you the exact diagram definitions of that time, which can be rendered to see the historical architecture. Some teams even automate generation of diagrams per release and publish them version-wise on the doc site (Docusaurus supports versioned docs). AI-generated diagrams need the same versioning considerations. It may be useful to record in documentation metadata which tool or model was used to generate a visual and when, especially if reproducibility is a concern.
- **Avoiding Binary File Conflicts:** One challenge with versioning images is that binary files can't be merged. Emphasizing the diagram source mitigates this – two people can work on the text definition of a diagram and merge via Git's normal mechanisms, whereas if both edited a diagram image, one would override the other. Thus, maintain a **single source of truth as text**. If the CI commits images, treat those as derived artifacts. Typically, authors never edit the SVGs by hand – they edit the `.puml` or Markdown, and CI updates the SVG. This clear separation prevents confusion.
- **Documentation of AI Usage:** In an enterprise setting, it might be policy to document where AI was used in content creation. If so, consider adding a note in your documentation style guide or repo README about the use of LLMs for diagrams. This note could mention that *"diagram XYZ was initially generated by GPT-4 and then reviewed/edited by our team."* This kind of transparency can be helpful internally for trust. (External readers typically don't need to know a diagram's origin, as long as it's correct; however, if publishing externally, ensure no confidential prompt data is accidentally embedded in images or metadata.)
- **Regular Audits:** As with any documentation, scheduling periodic reviews is wise. In these, verify that the AI-generated diagrams still reflect the actual system. If the Salesforce implementation changes (new integrations, objects, or processes), update the diagrams promptly. The low effort required to update a PlantUML diagram (just change a line of text) makes it more likely that diagrams will be kept current, compared to legacy approaches where they might be neglected due to the hassle of redrawing them in Visio or Lucidchart. Leveraging AI again for updates is also an option: you can feed the old diagram and new requirements to GPT-4 and ask it to output revised diagram code.

By following these practices, teams ensure that AI-assisted visuals remain a reliable part of a living, versioned documentation set, rather than a one-off experiment.
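As a sketch of the store-the-prompt practice, a committed PlantUML source file might begin with the originating prompt recorded as comments (`'` starts a line comment in PlantUML; the review note and abbreviated diagram body are illustrative):

```plantuml
' Prompt: "Draw a system architecture diagram in PlantUML. Show a
'   Salesforce Sales Cloud org, a middleware API gateway, and an
'   external billing system."
' Generated with GPT-4; reviewed and edited by the docs team.
@startuml
actor User
node SalesforceOrg <<Salesforce>> {
  component "Sales Cloud" as SC
}
User --> SC : uses
@enduml
```

Because the comments travel with the file through Git history, anyone updating the diagram later can see the original intent or reuse the prompt with new details.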
## Limitations, Challenges, and Ethical Considerations While the fusion of AI and documentation offers many benefits, it also introduces certain challenges and ethical questions that professionals should keep in mind: - **Accuracy and Hallucination:** LLMs like GPT-4 do not have perfect knowledge, especially about proprietary or very recent Salesforce features, unless provided context. They might **“hallucinate”** – producing diagram elements that sound plausible but are incorrect. For example, an LLM might erroneously draw a connection between Salesforce and a database that doesn’t exist, or mislabel a Salesforce component (calling a *Flow* an *Apex Trigger* by mistake). It’s crucial that technical experts validate every AI-generated diagram. The AI will follow the prompt literally, even if the instructions lead to an unreasonable or wrong outcome:contentReference[oaicite:42]{index=42}. As Bob Buzzard humorously pointed out in an example, a generative AI will comply with instructions that a human would recognize as faulty, underscoring the need for oversight:contentReference[oaicite:43]{index=43}. In documentation, this means never blindly trust an AI diagram without review. - **Security and Confidentiality:** Using external AI services to generate diagrams can pose confidentiality risks. Salesforce architectures often contain sensitive details (object models, integration endpoints, etc.). If those details are included in a prompt to an LLM API, they are effectively leaving the organization’s boundary and going to a third-party service (OpenAI, etc.). Enterprises should weigh this risk. **Mitigations** include: using anonymized descriptors in prompts (e.g., use abstract names instead of real project names), leveraging on-premise or private LLM instances if available, or using AI tools that run locally. For image generation, some organizations deploy internal stable diffusion models to avoid sending data to public services. 
Always comply with company policies about data exposure when using generative AI. - **Consistency and Style:** AI-generated visuals might have inconsistent styling if not guided. For instance, GPT-4 might produce different diagram styles in separate sessions – one sequence diagram might label participants differently than another, or DALL·E might generate images in varying art styles. To maintain a cohesive documentation style, writers should enforce guidelines: e.g., always instruct GPT diagrams to use certain naming conventions (*use CamelCase for component names*, or *color the Salesforce node in blue* if the renderer allows). With Mermaid, you can specify themes (like `%%{init: {'theme':'base'}}%%` in the code) to keep a uniform look. If Salesforce has official icons or design guidelines (like the **Salesforce Architecture Icons** library), purely AI-generated diagrams won’t automatically use them. One approach is to post-process the diagram (since PlantUML allows custom sprite icons, one could replace a generic database icon with the official Salesforce icon if needed). Another approach is to use AI as a starting point and then polish the diagram manually to fit style standards – though this adds some manual effort back. - **Ethical Use of AI in Content:** Enterprise documentation represents the company’s knowledge and authoritative guidance. Introducing AI into this process raises questions: Do we need to disclose that content was AI-generated? (Currently, there’s no requirement to disclose in documentation, as long as the info is verified and accurate – the focus is on correctness, not authorship. However, internal transparency about AI usage can be part of ethical AI practice.) Also, consider intellectual property: OpenAI grants rights to use the images its DALL·E model creates, but if using community models or images, ensure you have the rights to incorporate them. 
  Avoid generating images that contain logos or trademarks (asking DALL·E to draw the Salesforce logo, for instance, might produce an incorrect or infringing image). For text-based diagrams, IP is less of a concern, since they are essentially code that the team curates.
- **Model Limitations and Updates:** LLMs are trained on past data and might not know the latest Salesforce products (a prompt about *Salesforce Genie architecture* could confuse an older model). Always verify that the AI's knowledge is current, or provide the necessary context in the prompt. As models evolve (GPT-4 to GPT-5, and so on), the output for the same prompt may change, which can affect long-term diagram maintenance – a future model might format PlantUML slightly differently. Pinning one model version, or being prepared to make small adjustments, is wise. It is analogous to code producing slightly different results under a new compiler version: manageable with testing and review.
- **Overreliance and Skill Erosion:** Heavy reliance on AI for diagrams can, over time, erode a team's familiarity with diagramming tools and with its own system's architecture. AI should remain a help, not a crutch. Technical writers and architects should still know how to adjust diagrams manually and should not treat the AI as infallible. Use AI for rote tasks (layout, syntax) while ensuring humans make the design decisions (which components are needed and how they connect).
- **Performance and CI Considerations:** Generating images in CI is generally quick (PlantUML renders typical diagrams fast), but very large or numerous diagrams can slow down the build. Ensure your CI environment has adequate resources; if builds become slow, consider generating diagrams on demand or caching results.
  Similarly, if every edit triggers image regeneration, configure the Actions workflow to run only when needed (for example, by filtering on changed paths) to avoid unnecessary work.
- **User Trust in Documentation:** End users of the documentation may neither know nor care that AI was involved – they only see the final diagrams, and the goal is to make those diagrams trustworthy. As an ethical practice, some teams add an extra validation step: an architect reviews each diagram and signs off that it correctly represents the system. This is no different from normal documentation review, except that it acknowledges an AI helped create the artifact and therefore double-checks the nuances. If a mistake slips through, correct it promptly, as with any documentation erratum.

In summary, while generative AI offers powerful new capabilities for creating Salesforce documentation visuals, it must be applied with caution. With careful review, security awareness, and style-consistency checks in place, the benefits can far outweigh the risks. Technical writers can iterate faster and devote more time to high-level clarity, letting AI handle the heavy lifting of diagram drafting.

## Conclusion

Embedding AI-generated visuals into Salesforce documentation can significantly streamline the creation and maintenance of technical content. By harnessing GPT-4 to produce diagram definitions and using tools like PlantUML or Mermaid to render them, technical writers and architects can document complex Salesforce architectures and processes with greater efficiency and precision. The workflow outlined – from LLM prompt to automated CI/CD deployment – demonstrates that **docs-as-code** practices marry well with AI assistance. All diagram sources are preserved as code, ensuring that every visual is reproducible, editable, and versioned alongside the prose.
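As a concrete sketch of such a pipeline, a trimmed GitHub Actions workflow could re-render PlantUML sources to SVG only when diagram files change. The file paths, job names, and commit step below are illustrative assumptions, not taken from any cited pipeline:

```yaml
# Hypothetical docs-as-code workflow; paths and names are illustrative.
name: render-diagrams
on:
  push:
    paths:
      - "docs/diagrams/**"   # run only when diagram sources change
jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Render PlantUML sources to SVG
        run: |
          sudo apt-get update && sudo apt-get install -y plantuml
          plantuml -tsvg docs/diagrams/*.puml
      - name: Commit regenerated SVGs back to the repo
        run: |
          git config user.name "docs-bot"
          git config user.email "docs-bot@users.noreply.github.com"
          git add docs/diagrams
          git diff --cached --quiet || git commit -m "chore: re-render diagrams"
          git push
```

The `paths` filter is what keeps unrelated edits (prose-only commits, for example) from triggering a rebuild, while the guarded commit step avoids empty commits when rendering produced no changes.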
This modern approach to documentation turns what used to be a labor-intensive task – manually drawing and updating diagrams – into a largely automated pipeline. It encourages more frequent updates and fosters a single source of truth for both text and visuals in Salesforce guides. It also opens the door to creativity: writers can easily generate supplementary illustrations using generative image models, adding visual context for readers without commissioning a graphic designer for each diagram.

Importantly, professionals adopting these techniques should implement the proper checks and balances – thorough review of AI outputs, compliance with security policies, and alignment with documentation standards – to maintain the quality and trustworthiness expected of enterprise content. Done right, AI-generated visuals can be a game-changer: **Salesforce documentation becomes more dynamic, accurate, and easier to maintain**, keeping pace with the rapid evolution of the Salesforce platform itself. With references and tools now available in the community, from Bob Buzzard's experiments to CI templates and plugins, teams have a solid foundation for integrating AI into their documentation workflow. The result is a win-win: faster documentation cycles for writers and clearer, up-to-date visual explanations for readers, all backed by the power of automation and artificial intelligence.

**Sources:**

- Lars de Ridder, *"Generating PlantUML Diagrams with ChatGPT,"* Medium, Apr. 2023 – Discusses using ChatGPT to create UML diagrams from natural language.
- bool.dev, *"How to generate architecture diagrams with ChatGPT,"* Aug. 2024 – Provides examples of prompting ChatGPT for Mermaid sequence diagrams and integrating with Draw.io.
- Marco Siccardi, *"Version Control Your Diagrams: Automated PlantUML Rendering with GitHub Actions,"* MSicc's Blog, Jul. 2025 – Describes a docs-as-code approach with PlantUML, including a GitHub Actions pipeline that generates and commits SVG diagrams.
- Docusaurus v3 Documentation, *"Diagrams,"* Feb. 2024 – Explains how to enable Mermaid diagrams in Markdown for a Docusaurus site.
- Bob Buzzard (Keir Bowden), *Bob Buzzard Blog*, 2024 – Salesforce architect's blog featuring AI-generated visuals (DALL·E and GPT-4) used to illustrate Salesforce AI scenarios.
- *WorkingSoftware.dev – Documentation as Code Tools*, 2023 – Overview of diagram-as-code tools like PlantUML and Mermaid, and platforms like Kroki that render diagrams via API.
- Bob Buzzard, *"The Evil Co-Worker presents Evil Copilot – Your Untrustworthy AI Assistant,"* Aug. 2024 – Cautionary tale highlighting the need to test and review AI outputs in a Salesforce context.
## About Cirra AI
Cirra AI is a specialist software company dedicated to reinventing Salesforce administration and delivery through autonomous, domain-specific AI agents. From its headquarters in the heart of Silicon Valley, the team has built the Cirra Change Agent platform—an intelligent copilot that plans, executes, and documents multi-step Salesforce configuration tasks from a single plain-language prompt. The product combines a large-language-model reasoning core with deep Salesforce-metadata intelligence, giving revenue-operations and consulting teams the ability to implement high-impact changes in minutes instead of days while maintaining full governance and audit trails.
Cirra AI’s mission is to “let humans focus on design and strategy while software handles the clicks.” To achieve that, the company develops a family of agentic services that slot into every phase of the change-management lifecycle:
- Requirements capture & solution design – a conversational assistant that translates business requirements into technically valid design blueprints.
- Automated configuration & deployment – the Change Agent executes the blueprint across sandboxes and production, generating test data and rollback plans along the way.
- Continuous compliance & optimisation – built-in scanners surface unused fields, mis-configured sharing models, and technical-debt hot-spots, with one-click remediation suggestions.
- Partner enablement programme – a lightweight SDK and revenue-share model that lets Salesforce SIs embed Cirra agents inside their own delivery toolchains.
This agent-driven approach addresses three chronic pain points in the Salesforce ecosystem: (1) the high cost of manual administration, (2) the backlog created by scarce expert capacity, and (3) the operational risk of unscripted, undocumented changes. Early adopter studies show time-on-task reductions of 70-90 percent for routine configuration work and a measurable drop in post-deployment defects.
### Leadership
Cirra AI was co-founded in 2024 by Jelle van Geuns, a Dutch-born engineer, serial entrepreneur, and 10-year Salesforce-ecosystem veteran. Before Cirra, Jelle bootstrapped Decisions on Demand, an AppExchange ISV whose rules-based lead-routing engine is used by multiple Fortune 500 companies. Under his stewardship the firm reached seven-figure ARR without external funding, demonstrating a knack for pairing deep technical innovation with pragmatic go-to-market execution.
Jelle began his career at ILOG (later IBM), where he managed global solution-delivery teams and honed his expertise in enterprise optimisation and AI-driven decisioning. He holds an M.Sc. in Computer Science from Delft University of Technology and has lectured widely on low-code automation, AI safety, and DevOps for SaaS platforms. A frequent podcast guest and conference speaker, he is recognised for advocating “human-in-the-loop autonomy”—the principle that AI should accelerate experts, not replace them.
### Why Cirra AI matters
- Deep vertical focus – Unlike horizontal GPT plug-ins, Cirra’s models are fine-tuned on billions of anonymised metadata relationships and declarative patterns unique to Salesforce. The result is context-aware guidance that respects org-specific constraints, naming conventions, and compliance rules out-of-the-box.
- Enterprise-grade architecture – The platform is built on a zero-trust design, with isolated execution sandboxes, encrypted transient memory, and SOC 2-compliant audit logging—a critical requirement for regulated industries adopting generative AI.
- Partner-centric ecosystem – Consulting firms leverage Cirra to scale senior architect expertise across junior delivery teams, unlocking new fixed-fee service lines without increasing headcount.
- Road-map acceleration – By eliminating up to 80 percent of clickwork, customers can redirect scarce admin capacity toward strategic initiatives such as Revenue Cloud migrations, CPQ refactors, or data-model rationalisation.
### Future outlook
Cirra AI continues to expand its agent portfolio with domain packs for Industries Cloud, Flow Orchestration, and MuleSoft automation, while an open API (beta) will let ISVs invoke the same reasoning engine inside custom UX extensions. Strategic partnerships with leading SIs, tooling vendors, and academic AI-safety labs position the company to become the de-facto orchestration layer for safe, large-scale change management across the Salesforce universe. By combining rigorous engineering, relentlessly customer-centric design, and a clear ethical stance on AI governance, Cirra AI is charting a pragmatic path toward an autonomous yet accountable future for enterprise SaaS operations.
## Disclaimer
This document is provided for informational purposes only. No representations or warranties are made regarding the accuracy, completeness, or reliability of its contents. Any use of this information is at your own risk. Cirra shall not be liable for any damages arising from the use of this document. This content may include material generated with assistance from artificial intelligence tools, which may contain errors or inaccuracies. Readers should verify critical information independently. All product names, trademarks, and registered trademarks mentioned are property of their respective owners and are used for identification purposes only. Use of these names does not imply endorsement. This document does not constitute professional or legal advice. For specific guidance related to your needs, please consult qualified professionals.