A practical guide for Australian public servants

AI at work without the worry.

Confident, safe Microsoft Copilot use, starting with cases that never touch agency information.

The Australian Government is actively adopting Artificial Intelligence (AI) to enhance public service delivery, productivity, and policy evidence, guided by the Policy for the responsible use of AI in government (updated December 2025). APS staff are being actively encouraged to engage with AI tools, but many people are not confident about how to engage with this technology safely.

This guide provides support and a set of use cases to explore. It deliberately shows how each use case can be applied across the Copilot licence levels. If you are looking for additional use cases, or would like these prompts adapted for different tools, please reach out.

Pick the section written for you

All APS levels

Start here

Build confidence with prompts that only access public data and will be genuinely helpful to your working day.

Go to Start here
All APS levels

Full working guide

The complete guide, including all use cases by Copilot licence tier, scheduled workflows, and a prompt to help you meet the requirement to demonstrate efficiency gains.

Go to the guide
APS 6 to EL1

Managers and coaches

Build AI capability in your team without cutting corners on data handling.

Go to managers
SES Band 1 to 2

SES leaders

Practical advice to help you lead this transformation with confidence.

Go to SES

Know your ground

Three Copilot setups, three risk profiles

Before you start, work out which setup your agency uses. The answer changes what is in scope for you. If you do not know, that is your first question for ICT.

Web Copilot

Consumer mode, or signed in without enterprise data protection. Treat it as a public tool. Public information only. No official content, no personal information, no classified work.

Microsoft 365 Copilot

Runs inside your agency tenant with enterprise data protection. Can reference internal content you already have access to, subject to agency policy.

Combined licence

M365 Copilot plus web mode with enterprise data protection. Useful, but it needs discipline about which mode you are in and what you are putting in.

If you cannot confirm a use case is approved for your role, treat the tool as the most restrictive version.

For all APS levels. Build confidence first.

Start here

If you are wary about using AI at work, you are not being difficult. You are being professional. Privacy, security, and information handling obligations exist for good reasons, and they matter.

This section is for people who want to build a bit of confidence before they go near real agency work. Three things to try this week. Each one uses no agency information at all.

One rule worth holding on to

Public information in, public-information-shaped output. Stay in that lane until your agency’s ICT, security, or AI governance team has told you a broader use is approved for your role.

1. Learn something you have been meaning to understand

Works on: all Copilot tiers | Risk: low

Pick a framework, concept, or policy area you know you should understand better. Ask Copilot to teach it to you.

Copilot prompt
Act as a patient, plain-English teacher. I want to build a working understanding of [topic].

Start with a two-paragraph orientation for someone with no prior knowledge. Then give me the five things a practitioner would need to know, and three common misconceptions with what is actually true.

Use Australian examples where relevant. No jargon unless you define it.
Why it is low risk. You are not telling it anything about your work. You are asking it to teach you.

What to change: the topic, the depth you want, and whether you would like a test question at the end.

2. Get a shaped morning brief on your portfolio area

Works on: all Copilot tiers (web mode) | Risk: low

Copilot can scan publicly available Australian news and announcements and tell you what is relevant to your portfolio.

Copilot prompt
Scan publicly available Australian sources from the last 24 hours for news relevant to [portfolio area].

Give me three to five items in plain English, one sentence each. For each item, add one line on why it matters for someone working in this area.

Keep the whole brief under 300 words. Flag anything where source reliability is uncertain.
Why it is low risk. Public information in, shaped summary out. No agency content involved.

What to change: the portfolio area, the jurisdictions you care about, the word count.

3. Rehearse a tricky conversation with a generic scenario

Works on: all Copilot tiers | Risk: low when the scenario is synthetic

Copilot can play the other party in a roleplay. Keep the scenario generic and the names made up.

Copilot prompt
Act as [generic role, for example a frustrated community member] in a roleplay. The scenario: [generic setup with no real names, projects, or locations].

Play the role with realistic emotional range. Start the conversation. I will respond in character.

After six exchanges, pause and tell me what I handled well and two things I could have done better.
Why it is low risk. Nothing about the scenario is real.

What to change: the scenario, your role, the number of exchanges, the type of feedback that would help.

When you are ready for more

The full guide has more use cases across all three Copilot tiers, plus scheduled workflows and a bottleneck audit you can run on your own week. There is no prize for moving fast on something that needs care.

Go to the full guide

For all APS levels. Ready to go deeper.

The full working guide

A working set of low-risk, high-value ways to use Microsoft Copilot when you work for government. Structured by Copilot tier. Designed to sidestep data handling concerns where possible, and to flag them plainly where they apply.

Two principles underneath all of this

First. If a use case can be done without any agency information, do it that way. Public information, synthetic examples, and conceptual work sidestep most risk concerns in one move.

Second. Lateral beats obvious. Summarising your inbox is low value and, depending on content, higher risk. Helping you think, research, and scan external signals is high value and, structured well, lower risk.

Part one: use cases that touch no agency data

These work on any Copilot tier because they never reference government information. Start here if you are still building confidence.

1. Proactive media and announcements scan

Works on: all tiers (web mode) | Risk: low

A recurring scan across publicly available sources for news relevant to your portfolio. The output is not a news feed. It is a shaped brief that tells you what happened, why it matters, and where action or commentary may be needed.

Copilot prompt
Act as a seasoned Australian Public Service media monitor. You are good at spotting what matters to [portfolio area] at [agency], and translating it for a senior audience.

Scan publicly available Australian sources from the last 24 hours. Include ministerial media releases, Hansard, Senate committee announcements, peak body statements, AAP, ABC, The Guardian, and the major state and territory newspapers.

Return a brief in three parts:
1. What happened. Three to five items, plain Australian English, to APS writing standards.
2. Why it matters for [portfolio area] at [agency]. One line of implication per item.
3. Watch items. Anything a policy officer, an EL1, or an SES Band 2 should have on their radar today and over the next 7 days.

Keep the whole brief under 400 words. Flag anything where source reliability is uncertain. If you are not sure a claim is accurate, say so; do not present it as fact.
Why it is low risk. You are never describing your agency’s position, your internal advice, or sensitive work. You are consuming public signal and asking for shape.

What to change: the portfolio area, the jurisdictions, your trusted sources, the length. For a weekly version, ask for a Friday wrap instead.

2. Stakeholder mapping from public sources

Works on: all tiers (web mode) | Risk: low

When a new Minister, Secretary, or committee chair is appointed, or when you pick up a new brief, use Copilot to build a public-information stakeholder picture. Their speeches, interviews, committee remarks, and stated priorities.

Copilot prompt
I need a public-information briefing on [name, current role].

Using only publicly available Australian sources (official bios, speeches, Hansard, committee transcripts, media interviews from the last 24 months), give me:

1. Stated priorities and recurring themes in their public remarks.
2. Three direct quotes with dates and sources, showing how they talk about [topic area].
3. Any public views on [specific policy question].
4. A short note on stylistic tendencies (detail-oriented vs high-level, data-driven vs values-driven, formal vs plain-spoken).

Include source links. Flag anything uncertain or where sourcing is thin.
Why it is low risk. Public record in, public record out. Useful before any meeting where the other party has a stated position.

What to change: the person, the topic, the time window.

3. Concept learning coach

Works on: all tiers | Risk: low

Use Copilot as a patient teacher for frameworks, concepts, or domains you need to understand but have not had time to study. Useful when moving between adjacent policy areas, or when a promotion lands you in a brief you do not know cold yet.

Copilot prompt
Act as a patient, plain-English teacher. I want to build a working understanding of [concept or framework, for example the Commonwealth Procurement Rules, or actuarial risk in social policy].

Teach me in these stages:
1. A two-paragraph orientation, assuming no prior knowledge.
2. The five most important things a practitioner would need to know.
3. Three common misconceptions, and what is actually true.
4. A short scenario-based question so I can test my understanding. Wait for my answer before giving feedback.

Use Australian examples where relevant. No jargon unless you define it.
Why it is low risk. No agency content. You are learning.

What to change: the concept, the depth, the examples. If you have a specific use for the learning, tell it so the examples land close to real work.

4. Plain-English and accessibility coach (on synthetic text)

Works on: all tiers when text is synthetic | Risk: low

If you want to sharpen your writing without pasting real work, write a short synthetic paragraph on the type of content you usually produce. Not your actual content. Something that mirrors the structure and vocabulary without containing any real information.

Copilot prompt
Here is a short synthetic example of the kind of writing I produce: [paste 150 to 250 words of fabricated but representative text].

Act as a plain-English coach. Give me:
1. A rewritten version that meets Australian Government Style Manual plain-English standards.
2. A short list of the patterns you changed and why (for example, passive to active, bureaucratic nominalisations, hedging).
3. Three rules I can take into future writing.

Keep your tone direct and practical, not academic.
Why it is low risk. The text is made up. You get the technique transfer without exposing anything real. For feedback on a real draft, only do this in M365 Copilot within your tenant, and confirm your agency has approved that use.

What to change: the synthetic example, and the standards you want applied (plain English, specific reading level, particular audience).

5. Roleplay a difficult conversation (generic scenario)

Works on: all tiers, scenario must be synthetic | Risk: low

Stakeholder escalation. A performance conversation. A community meeting that could go sideways. Copilot can play the other party so you can rehearse. Keep the scenario generic, the names fictional, the agency unnamed.

Copilot prompt
Act as [role, for example a concerned community member] in a roleplay. The scenario is:

[Brief generic setup, no real names, projects, or locations. For example: "A local resident is upset about a consultation process for a generic infrastructure project. They feel unheard and are escalating."]

Play the role with realistic emotional range. Start the conversation. I will respond in character as the [role, for example agency representative].

After six exchanges, pause and give me feedback on:
1. What I handled well.
2. What I missed or could improve.
3. Two alternative lines I could have used at key moments.
Why it is low risk. Nothing about the scenario is real.

What to change: the scenario, your role, the number of exchanges, the feedback focus.

6. Public consultation and submission landscape scan

Works on: all tiers (web mode) | Risk: low

Before you start drafting advice on a policy question, use Copilot to map what is already in the public record. Submissions to past inquiries, peak body positions, academic commentary. You are not getting the answer. You are getting the terrain.

Copilot prompt
I am researching the public landscape on [policy question, for example regulation of AI in recruitment].

Scan publicly available Australian sources and return:
1. The three to five most cited positions, with who holds them.
2. Points of genuine disagreement, as opposed to surface-level framing differences.
3. Any recent (last 12 months) shifts in the debate.
4. Peak bodies, academics, or think tanks whose work on this is worth reading.

Provide source links for each point. Flag any claim you are not confident about.
Why it is low risk. Public sources in, public map out. No agency position disclosed.

What to change: the topic, the time window, whether to include international comparisons.

Part two: use cases that benefit from M365 Copilot grounding

These run inside your agency tenant with enterprise data protection. They use content you already have access to. Always check your agency has enabled the relevant features and confirmed the use cases.

7. Expertise finder across your agency

Works on: M365 Copilot only | Risk: medium, uses existing access

“Who in my agency has worked on something like this before?” Copilot can search across content you have permission to see, and surface people, projects, or documents relevant to a question you are wrestling with.

Copilot prompt
I am working on [short non-sensitive description of the problem space]. Search the content I have access to across SharePoint, Teams, and email, and identify:

1. People who appear to have worked on similar problems. Return names, roles, and the documents or conversations that indicated their involvement.
2. Documents that address related questions.
3. Any relevant past decisions or precedents.

Rank results by how closely they match. Exclude anything older than three years unless foundational.
Data handling note. This uses your existing access. You are navigating content already cleared for you, not creating new exposure. Still, confirm your agency’s AI governance body has approved this pattern.

What to change: the problem description (keep it high-level), the time window, the sources. You are asking it to find people, not to read briefs.

8. Orientation to a new project or team area

Works on: M365 Copilot only | Risk: medium, uses existing access

When you pick up a new brief, ask Copilot to help you get up to speed by orienting you to the shape of the content in that area. Not to summarise individual documents. To map the territory.

Copilot prompt
I have just picked up [project or team area]. Give me a first-day orientation using the content I have access to:

1. The shape of the territory. What are the main workstreams, documents, and decision points?
2. Key people. Who owns what? Who should I meet first?
3. Recent activity. What has happened in this area in the last 90 days?
4. Open questions. What looks unresolved or in flight?

Keep it to one page. I will follow up on specific threads.

What to change: the project or team area, the time window, the level of detail.

9. Personal workflow consultant

Works on: M365 Copilot only | Risk: medium, metadata-first is cleaner

Instead of asking Copilot to do your work, ask it to look at your work patterns and tell you where the friction is: your calendar, your task list, your meeting load. This only works inside M365 Copilot because it needs visibility of your actual activity.

Copilot prompt
Act as a workflow consultant looking at my activity over the last four weeks.

Do not read the content of emails or documents. Look at patterns only:
1. Where am I spending the most time?
2. Which meeting patterns look inefficient (back-to-back scheduling, no prep time, recurring meetings with low attendance)?
3. What recurring threads seem to eat time without progressing?
4. Where would a different rhythm, batching, or delegation help?

Give me three changes I could make next week, in priority order.
Data handling note. Starting with metadata-only is a cleaner privacy posture. Confirm your agency has approved this pattern before widening it.

What to change: the time window, whether to look at content or metadata only.

Part three: the bottleneck audit

One of the highest-value uses of Copilot is not any single task. It is a structured think. Coach yourself through identifying where your week actually loses hours, then design small interventions.

This works on any tier because you describe tasks at a conceptual level. No document contents, no stakeholder names, no sensitive detail.

Bottleneck audit prompt

Copilot prompt
Act as a workflow design coach. I want to do a bottleneck audit on my week.

Ask me questions to identify:
1. The three to five tasks that consistently take longer than they should.
2. For each, what makes them slow (information gathering, coordination, rework, approvals, context-switching).
3. Whether the bottleneck is in the task itself, the inputs, or the handoffs.

Once we have the picture, help me design three interventions. For each, tell me:
- What would change.
- What I would need to set up to make it work.
- Where AI tools could help, and where they could not.

Ask me one question at a time. Do not move on until I have answered.
Why it is low risk. You control what you tell it. Keep descriptions at the level of "I spend too long reconciling information across three systems" rather than naming the systems, the data, or the stakeholders.

What to change: the scope (your week, a specific project, a workflow), and whether you want it to push hard or stay gentle.

Part four: scheduled and recurring tasks

Where your licence supports scheduled prompts or agent-style workflows (confirm with your ICT team; Microsoft updates this regularly), build these for compound benefit.

Weekly portfolio wrap

Fridays, 4pm | Public sources only

What happened in your policy area this week, and what is coming up next week from public signals.

Copilot prompt
Scan publicly available Australian sources for [portfolio area] from the last seven days and the next seven days.

Past week: three to five developments with implications for my work.
Coming week: scheduled public events (committee hearings, consultation closures, media events, legislation introductions).

Plain English, one page, bullet points.

International jurisdictional scan

Monthly | Public sources only

What comparable jurisdictions (UK, NZ, Canada, specific EU countries) have released on a topic. Useful for policy officers who need to know what is happening elsewhere.

Copilot prompt
Scan official government and reputable policy research sources in [UK, NZ, Canada, and the Netherlands] for developments on [topic] in the last 30 days.

For each jurisdiction, return: what was announced or released, the publishing body, and a one-line relevance note for an Australian policy context.

Include links. Flag anything where the source is unofficial or the translation is uncertain.

Treat recurring output as a first pass

Scheduled tasks consume public information and deliver output to you. Before relying on any recurring output, spot-check it for accuracy. Verify before it informs actual advice.

For APS 6 to EL1. Building AI capability in a team.

For managers and coaches

Your people’s caution about AI is an asset, not a blocker. The biggest risk in AI adoption is not that teams move slowly. It is that they move fast on the wrong things.

This section is for managers, team leaders, and coaches who want to build real, durable AI capability without cutting corners on data handling.

Three things to avoid

  • Do not mandate AI use as a productivity measure. It pushes nervous people to take shortcuts. Shortcuts are where the incidents happen.
  • Do not signal urgency. Your team reads your signals. If you communicate that they are behind, or that the agency is being left behind, people will put content into tools they should not be using.
  • Do not get ahead of what your agency has approved. Your team’s risk appetite should not exceed your agency’s formal position. Check with ICT, security, privacy, and AI governance before you endorse any specific tool or workflow.

Three things to do instead

  • Model the safe use case yourself. Use Copilot visibly on things that touch no agency information. Learning a new policy area. Scanning public media on your portfolio. When your team sees you working in the safe lane, the safe lane becomes the norm.
  • Separate capability building from production work. Carve out dedicated time for people to build confidence on non-work use cases. An hour a fortnight is enough to move the dial. People who have played with the tools at low stakes make better decisions about higher-stakes use.
  • Remove real blockers, not imagined ones. Licensing? ICT approvals? Lack of clarity on what is approved? Those are the fixable problems. Fix those. Do not try to fix “my team is slow to adopt AI” as if it is a cultural problem. Usually it is not.

What good AI leadership sounds like in the APS

Quieter than the hype version.

It sounds like: “Here is what we know is approved. Here is what we are checking on. Here is the lane we work in until that changes. Come to me if something is unclear before you act on it.”

It does not sound like: “We need to move faster.” Or: “Everyone else is doing it.” Or: “Just try it and see.”

A prompt to try this week

Model the safe lane yourself. This media scanning prompt uses only public information, so it works on any Copilot tier. Run it on your portfolio, share the output with your team, and let them see how you use it.

Media scanning prompt

Copilot prompt
Act as a seasoned Australian Public Service media monitor. You are good at spotting what matters to [portfolio area] at [agency], and translating it for a senior audience.

Scan publicly available Australian sources from the last 24 hours. Include ministerial media releases, Hansard, Senate committee announcements, peak body statements, AAP, ABC, The Guardian, and the major state and territory newspapers.

Return a brief in three parts:
1. What happened. Three to five items, plain Australian English, to APS writing standards.
2. Why it matters for [portfolio area] at [agency]. One line of implication per item.
3. Watch items. Anything a policy officer, an EL1, or an SES Band 2 should have on their radar today and over the next 7 days.

Keep the whole brief under 400 words. Flag anything where source reliability is uncertain. If you are not sure a claim is accurate, say so; do not present it as fact.

What to change: the portfolio area, the jurisdictions, your trusted sources, the length. For a weekly version, ask for a Friday wrap instead.

Questions worth running past your agency’s AI governance body

  • Which Copilot tier does the agency have licences for, and which roles have access?
  • What use cases has the agency explicitly approved?
  • What is the escalation path if a team member is unsure whether a use is in scope?
  • What is the incident reporting process if something goes wrong?
  • Is there an internal community of practice or champion network for AI use?

If the answers are not yet clear, that is itself a useful finding. Your team cannot move confidently on ambiguous ground.

For SES Band 1 to 2. Setting tone across a cluster.

For SES leaders

The pressure is on APS leaders to help their people engage with AI, yet everyone is getting the same generic advice: give it a go, but be careful. This guide tries to bridge the gap between that advice and day-to-day practice, noting that agencies have set up different processes, risk appetites, and tool licences.

People being cautious is not a bad thing

Workforce caution on AI is a leading indicator of maturity: it shows your people appreciate the gravity of the risk. Right now, the challenge is getting them to start. In a few months, the risk may shift to people becoming overconfident and no longer exercising the same due diligence (see the Dunning-Kruger effect).

This guide starts with useful prompts and advice on how to use them. In the coming weeks, we hope to evolve it to look further ahead: how we keep people talking about how they are actively managing risks in their AI use, so that increased use does not bring increased risk.

We would love people to contribute. If you have a great use case to work through, or have had a sticky hypothetical escalated, let us know. The more we collectively learn now, the less likely the policy will be stress-tested in the courts.

Some useful tips for leaders

Each time you communicate to the team about AI use, be clear about:

  1. Scope and tool use. Which tool and licence tier is available to which roles, and what is formally approved for each.
  2. Sequencing. What needs to be in place (licensing, assurance, governance pathway, incident reporting) before you encourage broader use. Show people where they can confidently use the tools available.
  3. Escalation and exploring the grey. Work with your directors to create opportunities for teams to explore the grey areas and work out what they mean for their work.

Where you can demonstrate active leadership

  • Show your work: the wins and the failures (especially the failures). Use Copilot on public information in your own work, in public. Run the media scan prompt provided on this site and share the output with your group, branch, or division, making clear that the prompt and output came from AI. Share a weekly whoopsie: where did you try and “fail”, what happened, and how did you resolve it? Celebrate failures (and their appropriate handling) as loudly as the successes.
  • Fund capability building beyond “what to click when”. Training in the tech is not the priority (honestly, the tech is largely straightforward), and prompting skills are rapidly honed. Look for training that shows how to apply the AI Impact Assessments and does not offer silver bullets. Assessing how to use these tools is hard, and it should be: each case brings nuance and needs applied critical thinking. The Copilot trial showed that participants who attended three or more training sessions were 28% more confident in their use of the tools.
  • Intentionally provide time for working through grey areas. Each team should set aside at least one hour a fortnight to pick up a use case and run it through the responsible AI policy, the AI Impact Assessment, and other tools. Take the AI out of the equation: how do they manage a risk today? How do they know their team has rigorously researched before writing a position paper? How could you get that same level of assurance from AI?

A prompt for your own strategic thinking

Conceptual. No agency data required. Works on any Copilot tier.

SES strategic prompt

Copilot prompt
Act as a strategic advisor to a senior executive in the Australian Public Service. I lead [portfolio or function] with a cluster of roughly [size] people across [APS levels].

Before I communicate a position on AI capability to my people, help me stress-test my thinking in three areas:

1. Governance. What decisions do I need to have made (or confirmed with our AI governance body) before I communicate a position to my cluster?
2. Signal. What tone and language will my SLT and my team read from the position I take? Where am I at risk of sending a signal I did not intend?
3. Sequencing. What should be in place before I encourage broader use, and how do I sequence investment across capability, licensing, and assurance?

Ask me one question at a time. Use Australian Public Service context. Do not assume I have already resolved any of the above. Push back where my reasoning is thin.

What to change: portfolio description, cluster size, APS level mix, and how hard you want the advisor to push. For a softer version, ask it to coach rather than pressure-test.

For everyone

When to check, and when to escalate

Go to your agency’s privacy officer, security advisor, legal team, or AI governance body before you proceed if any of the following apply.

  • You are not sure whether a data category you want to use is in scope for your Copilot tier.
  • You are considering a recurring workflow that touches any non-public information.
  • You are working with personal information under the Privacy Act 1988 and want to use any AI tool in the processing.
  • You are asked to use AI in a way that feels off, even if you cannot name why.

Caution is professional good judgement. This is not about removing it. It is about finding the wide space where AI helps you think, scan, and learn, without asking it to touch information it should not see.

The Protective Security Policy Framework, the Information Security Manual, the Privacy Act 1988, and DTA guidance on AI use in government all update regularly. Microsoft also updates Copilot features, licensing behaviour, and data handling rules regularly. Confirm anything described here with your ICT team or current Microsoft documentation before you build a workflow on it.

Note from the author

Why I created this

I spend a lot of time with public servants who understand and are on board with the government’s position to increase AI use, yet still don’t feel confident about how to enact the policy. Essentially, they’re worried about doing the wrong thing with AI and potentially creating another Robodebt.

While it’s heartening to be reminded how much our public service truly cares, it also shows that a lot of advice hasn’t been grounded in individuals’ day-to-day work. I thought I was helping to move the dial with my Deficit-first Framework, but after 15+ years implementing technical change in government, I’ve learnt that people don’t need frameworks; they need direction and contextualised help.

The majority of people I’ve spoken to have read the policies and have a broad understanding that AI can be helpful. The issue is that, for a whole bunch of reasons, they can’t see how it will be helpful in their own work.

In creating this, I’ve tried to think of case studies that surface relevant and trustworthy information from publicly available sources in a way that makes the most sense to the individual. Most advice tells people to start with data inside the system, which is inherently higher risk. These use cases are designed to give you a meaningful result from the get-go.

I will keep updating this as the tools and the rules evolve. If something is out of date, or if you have a use case that deserves a place, please let me know. Also, if you have had a win, I’d love to showcase it. Please send me an email: hello@theunordinary.co.

Sian Rinaldi. Founder, The Unordinary. AI adoption coach, human-centred designer, and long-time collaborator with the Australian public sector.

sian@theunordinary.co  ·  theunordinary.co

What this is not

A few important notes

  • This is not an official government resource. It references Australian government frameworks. It does not represent them.
  • This is not legal, privacy, or security advice. It is a practitioner guide. For advice that governs your conduct, talk to your agency’s legal, privacy, and security teams.
  • Nothing here is a substitute for your agency’s AI governance body or ICT team. Where this guide and your agency’s position differ, follow your agency.
  • Microsoft updates Copilot features, data handling rules, and licensing regularly. What is true today may not be true in three months. Verify with your ICT team before you build a workflow on it.

Questions, corrections, or suggestions are welcome at hello@theunordinary.co.