
Task Workflows

There are several types of tasks, and each has its own general workflow. Each task is also given a "story point" value, an approximate estimate of the amount of effort and time required to complete it. The actual time required varies depending on how much context the assignee has.

All tasks share at least the following workflow:

  1. Create the task. Early in the process, tasks can have a sparser description than required by the task description standard - for example, a placeholder task created to include in the sprint before the task creation cut-off.
  2. Refine the task. This means improving its description to match the task description standard, determining whether it's appropriate for the sprint, and estimating how many story points it will probably require. To ease planning, this should be done as early as possible - and in any case no later than the end of the Friday before the sprint starts.
  3. Assign the task, including an Assignee and Reviewer (possibly yourself). Address their questions and comments. Like step 2, this should be done as early as possible, so they have the opportunity to shuffle tasks around as needed.
  4. Do the task. How each task is generally "done" depends on its type; see more in the sections below.
  5. Review the task (internal/upstream). How one reviews a task tends to mirror how one does it. For example, discovery tasks generally don't require writing test code, so their review generally doesn't require testing anything step by step, as would be the case for a general task.
  6. Close the task.

There are eight columns on the sprint board which a ticket will move through as the task is completed. These are detailed under Task statuses.
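
For anyone scripting against the board (e.g. for reporting), the happy-path column order can be sketched as follows. This is an illustrative model only, not official automation; real tickets can also move backwards (e.g. from "Internal review" back to "In progress"):

```python
# Simplified sketch of the eight sprint board columns, in the order a
# typical ticket moves through them (see "Task statuses" below).
SPRINT_BOARD_COLUMNS = [
    "Backlog",
    "This sprint",
    "In progress",
    "Internal review",
    "External Review/Blocked",
    "Merged",
    "Deployed & Delivered",
    "Done",
]

def next_column(current):
    """Return the next column in the happy-path flow, or None once Done."""
    i = SPRINT_BOARD_COLUMNS.index(current)
    return SPRINT_BOARD_COLUMNS[i + 1] if i + 1 < len(SPRINT_BOARD_COLUMNS) else None
```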

Task description standard

Tasks are required to follow this template, with all of its elements included:

h3. Story

"As a < type of user >, I want < some goal > so that < some reason >"

h3. Full description

_A full description of the context and goals for the work (the *why*), as well as relevant non-technical product information_

h3. Completion criteria

* _A list that delimits the scope precisely - if anything is needed for the work but not listed here or in the handbook, it is scope extension and should go in a separate task. This section replaces the "Acceptance criteria" field, which is deprecated, doesn't need to be filled in, and will be removed in the future._
* Automated testing must cover the common paths in the behavioral specification.

h4. Behavioral specifications

_(optional) Describe the various actions users can perform in the software and the expected response of the software. Include the common happy paths, and paths which result in errors._

* "When the user <action>, <response>."

h3. Documentation updates & improvements criteria

* _A list of specific documentation requirements, to ensure constant attention and iterative improvements to documentation, or the mention "Left to the assignee’s appreciation"_

h3. Relevant repositories

* _Include a list of repositories that are relevant to the task, including the branch name if required._
* _[Optional] Include links to the related code sections._

h3. Review timeline

* PR to be sent for review by <date>
* First PR review to be completed by <date>
* _[Optional]_ Draft/WIP PR sent for review by <date>
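
The entries under "Behavioral specifications" map almost one-to-one onto the automated tests requested in the completion criteria. A minimal sketch, where the login form and its helper function are invented purely for illustration:

```python
# Hypothetical example: turning a behavioral specification line like
#   "When the user submits an invalid email, an error message is shown."
# into an automated test. `submit_login_form` is an invented stand-in
# for the application under test.

def submit_login_form(email):
    """Toy stand-in for the application under test."""
    if "@" not in email:
        return {"ok": False, "error": "Please enter a valid email address."}
    return {"ok": True, "error": None}

def test_invalid_email_shows_error():
    # "When the user submits an invalid email, an error message is shown."
    response = submit_login_form("not-an-email")
    assert response["ok"] is False
    assert "valid email" in response["error"]

def test_valid_email_happy_path():
    # "When the user submits a valid email, the form is accepted."
    response = submit_login_form("learner@example.com")
    assert response["ok"] is True
```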

Note: For tasks where the author and the assignee are the same person, it may be tempting to make an exception and leave some of these fields out or incomplete. However, this will prevent someone else from taking the task if work needs to be shifted around during sprint preparation. Even when you write a task for yourself, you and your reviewer will still benefit from a more careful description of its scope: it makes the task easier to estimate and plan, and makes it easier to decide during implementation whether something is in scope, reducing the chances of bad surprises and spillovers.

Note: For epics, use the epic description template instead of this one.

Task Types

General Tasks

  • 1 point (approx. 2-4h) - Trivial task: We know the codebase, there's little or no code to write, little or no need for code review, no back-and-forth interaction with outside teams, and no deployments.
    Example: Account Verification emails not sending for HUIT instance (OC-2780)¹
  • 2 points (approx. 4-8h) - Small task: The change required is simple, or the reason for the bug is evident. There may be a deployment, or we may need to interact with an external team (e.g. edX code review), but those are expected to go quickly and smoothly. There will be perhaps one new unit test.
    Example: UI issue in Course Info Overlay (OC-2792)¹
  • 3 points (approx. 8-16h) - Moderate task: There is some uncertainty, so investigation is required; the task will involve some new code, maybe a couple of new tests, and an external review.
    Example: Send email to unregistered students who request password reset (OC-2857)¹
  • 5 points (approx. 16-32h) - The change is significant and would probably take more than a day; or the change seems small, but most people on our team are not familiar with the codebase, so some learning or interaction with an external team will definitely be required. The code will require quite a few new tests.
    Example: Shared RabbitMQ support for OpenCraft IM (OC-1719)¹
  • 8 points (approx. 32h+) - The task would take anyone a significant amount of time to implement, and will then require a careful review, which will likely involve back-and-forth and more coding. There is significant risk and/or novelty. We will need to interact with external teams, and the upstream review is likely to come back with changes. The code will need several types of tests, including Selenium integration tests. The best approach to take is not yet known or may be controversial.
    Example: Display Course Information on Landing page (OC-2617)¹
  • 13 points (approx. 40h+) - The task is very large and novel, should be split up across sprints as necessary, may require discovery at multiple steps in the implementation, will require a significant amount of test code and careful manual testing (both internally and by other teams), and is generally very experimental, even for non-technical stakeholders.
    Example: Build Programs Landing Page (OC-2963)¹
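
For quick capacity sanity checks, the point-to-time approximations above can be captured in a small lookup. This is an illustrative helper, not part of any official tooling, and the ranges are the rough approximations from the table, not hard limits:

```python
# Approximate time ranges for general-task story points, taken from the
# table above. Open-ended ranges ("32h+", "40h+") use None as the upper bound.
POINTS_TO_HOURS = {
    1: (2, 4),
    2: (4, 8),
    3: (8, 16),
    5: (16, 32),
    8: (32, None),
    13: (40, None),
}

def rough_hours(points):
    """Return the (min, max) approximate hours for a story-point value."""
    if points not in POINTS_TO_HOURS:
        raise ValueError(f"{points} is not a valid story-point value")
    return POINTS_TO_HOURS[points]
```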

Discovery Tasks

Discovery tasks are timeboxed (with rare exceptions), so the general time approximations don't apply.

  • 1 point - Trivial discovery: We've done a very similar estimation/discovery before (possibly for the same client), the discovery involves material that would be familiar to any one of us, and a discovery document most likely isn't required.
    Example: Estimate PHP upgrade (OC-2881)¹
  • 2 points - Small discovery: The discovery is relatively simple, or we already have a pretty good idea of what the estimations will be. A discovery document most likely isn't required, and most communication about it can happen right on the ticket.
    Example: Ooyala Closed Captions (OC-1899)¹
  • 3 points - Moderate discovery: There is some uncertainty, so the discovery will require a careful review. A discovery document is likely required.
    Example: Ginkgo Upgrade (OC-2762)¹
  • 5 points - The discovery will require a deep dive and involves several different moving pieces to consider. There's plenty of uncertainty, so a lot of investigation and a careful review will be required. A discovery document (possibly multiple) is required to effectively communicate the results.
    Example: Cross-cloud Analytics for Cloudera (OC-2109)¹
  • 8 points - A very open & novel discovery that has all the traits of a 5-point discovery, but will also require contact with other teams, a detailed discovery document, diagrams, and possibly even slides/videos to effectively communicate with external stakeholders.
    Example: OpenStack Production Support (OC-1046)¹

Prototype Tasks

A prototype task is just like a general task, except that the code does not have to include tests, meet all a11y/i18n requirements, or be merged. The code should still be able to demonstrate the new functionality, though. Usually, after the prototype is complete, the task assignee will create one or more additional stories to complete the work.

  • 1 point - Trivial prototype: We're very familiar with what needs to be changed or added to get the prototype ready, no tests should be necessary, and it should take very little effort.
    Example: "Prototype: Client requests site's theme to be a light blue"
  • 2 points - Small prototype: The prototype is pretty simple or straightforward, and is not considerably novel. The additions required will not be significant, and writing tests will most likely not be necessary.
    Example: Auto-advance to next unit after a video (OC-2594)¹
  • 3 points - Moderate prototype: There is some uncertainty, so investigation is required; tests may possibly be needed.
    Example: Course Blocks API student_view_data for step builder (OC-2809)¹
  • 5 points - A pretty large prototype for which the solution or preferred path is not obvious, and some fresh decision-making will have to be done. The prototype requires significant domain knowledge.
    Example: Open edX on OpenStack Continuous Integration (CI) (OC-2167)¹
  • 8 points - The prototype is large and novel, and will require a detailed discovery that will need to be carefully communicated to the reviewer. A lot of domain knowledge will be required, and some may even need to be invented.
    Example: DiscussionXBlock Prototype (OC-1630)¹

Trial Project Tasks

We use trial projects to evaluate candidates who pass our interviews. Tasks that can be used as trial projects should be labeled trial-project in Jira.

A trial project should have the following characteristics:

  • A trial project ticket should take ~20h for a candidate to complete.
  • The task should be complex enough to judge the candidate's technical ability.
  • An ideal trial project task is a combination of backend, frontend, and DevOps-related work.
  • A core team member should be the reviewer of the task.
  • Client tasks should be low-priority ones with enough buffer between the start of the recruitment round and the deadline (at least 2 sprints).

These tasks should also contain at least the following information:

  • Affected code
    • Provide a link to the repository and target branches for each PR you expect this ticket to produce. For instance: https://github.com/ORG/REPO/tree/BRANCH
  • Preparation
    • Indicate what devstack setup or other preparatory steps are required to complete the task.
  • Risk factors
    • List any uncertainties, ambiguities, or risk factors that you can identify for this task.
  • Acceptance criteria
    • List these in the description, or as checklist items on the task if you prefer.
  • Related tasks
    • Link to any discovery or preliminary tasks, or to the epic for more context on this work.
  • Estimate
    • Ensure that the Original Estimate and Remaining Estimate fields are filled in.

Ensure that the 20 hour timebox limit is clearly stated in the ticket description.

If the trial project is a client ticket and the candidate can't complete it or gets stuck for some reason, these rules apply here as well.

Creating a task

The Create button in JIRA will open a form where you need to input some basic information. Some fields are self-explanatory, others take a while to get used to. In particular:

  • epic: this is always expected, because tasks should belong to an epic so they aren't forgotten. Find a related one, and ping someone (e.g. the epic owner) if you need to verify it. The epic needs to be open (in development)
  • account: this is for accounting and billing. It takes some time to get a feeling for which account is right for each type of work (e.g. support vs. maintenance, bugs vs. upgrades). Try to use the same account as in a similar task and then ping someone (e.g. epic owner) if you need a double-check. The most important decision is whether it's an internal account or a client account. There are also cell budgets that limit the amount of internal work.
  • summary: a short title
  • description: the most important field. We have a template that you can use. As the reporter, you should include all the information from the template
  • story type (story, epic, bug): use story by default, or bug if the task looks like a bug fix. We treat both the same way, but they get different icons. Use epic if you're creating new projects (see epic management)
  • story points: leave this blank, and we'll include the task in an estimation session. If the task is trivial, you can estimate it yourself
  • original estimate and remaining estimate: these can be decided later by the assignee. The original estimate is set once and doesn't change, whereas the remaining estimate is dynamic. They will start at the same value but the remaining estimate will decrease as time is logged, and you may also manually adjust it. If the estimate for this task has been shared with the client (e.g., the task came from a discovery), then use that estimate here. The assignee might still change this number, but it gives them an idea of where to start
  • assignee and reviewer: these can be decided later
  • sprint: unless you're sure about when we will do the task, leave this blank and ping the epic owner to schedule it
  • due date, labels, upstream reviewer, time budget, affects version, fix version, flagged, checklist: don't worry about these during task creation. We may use some of these fields later as part of epic and sprint management, but it's fine to leave them blank when you create tasks
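
The interplay between the original and remaining estimate fields described above can be sketched as a toy model. Jira tracks all of this for you via "Log work"; this is only an illustration of the behavior:

```python
# Simplified model of Jira's Original Estimate / Remaining Estimate fields:
# the original estimate is set once and never changes; the remaining
# estimate starts at the same value and decreases as time is logged
# (never below zero), and can also be adjusted manually.

class TaskEstimate:
    def __init__(self, original_hours):
        self.original = original_hours   # set once, never changes
        self.remaining = original_hours  # dynamic

    def log_work(self, hours):
        """Log time spent; the remaining estimate decreases accordingly."""
        self.remaining = max(0.0, self.remaining - hours)

    def adjust_remaining(self, hours):
        """Manually correct the remaining estimate (also allowed in Jira)."""
        self.remaining = hours
```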

Task Statuses

Please see below for the ticket statuses we use in our JIRA tracker, and what they each tend to represent.

Backlog

Most tasks start here.

The tasks at the top of the backlog are the ones with the highest priority. Epic owners are usually the ones who create and prioritize tickets according to the client's needs, except during discovery tasks for large epics, when the person who does the discovery will create the tasks. In general, if you have an idea for a new task, discuss it either:

  • With the epic owner of a related epic (who might have time budget available to work on it),
  • With your cell (which could allocate some non-billable time, depending on its priorities),
  • On the forum if this is meant to be a larger initiative.

This sprint

This is the list of tasks assigned to individuals and expected to be finished by the end of the sprint.

Once you start working on a task, you should move it to the next column.

In progress

Tasks that are currently being worked on. In this column, the next steps are the assignee's. Use this column whenever the task needs any further work, including when the code reviews are done and the code is ready to be merged/deployed.

While you work on the task, log the time you worked on the task progressively ("Log work"). You can also push to your remote branch regularly.

Once you are finished, create a Pull Request and move the task to the next column. Add the Pull Request to the task by clicking More > Link > Web Link, and pasting the URL of the Pull Request. (If you have many PRs, add them to the description or use a checklist, so that you can indicate the status of each PR by marking them with a check or strikethrough as they are merged, which cannot be done when using a simple link. See OC-10641 as an example.) Also, be sure to notify the person who will review your work. (Mentioning their username in a comment on the task is usually enough.)
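
If you prefer to script the linking step, Jira's REST API exposes a remote-link endpoint for attaching web links to issues. A minimal sketch using only the Python standard library; the base URL, credentials, and helper names here are placeholders, and the exact authentication scheme depends on your Jira instance:

```python
import json
import urllib.request

# Hypothetical helper: attach a pull-request URL to a Jira issue as a
# remote (web) link, mirroring the More > Link > Web Link UI action.
# JIRA_BASE is a placeholder for your instance's base URL.
JIRA_BASE = "https://yourcompany.atlassian.net"

def build_remotelink_payload(pr_url, title):
    """Request body for POST /rest/api/2/issue/{key}/remotelink."""
    return {"object": {"url": pr_url, "title": title}}

def link_pr_to_issue(issue_key, pr_url, auth_header):
    """POST the remote link; `auth_header` is e.g. a Basic auth value."""
    payload = build_remotelink_payload(pr_url, f"PR: {pr_url}")
    req = urllib.request.Request(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/remotelink",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Authorization": auth_header},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```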

Internal review

This column is for tasks that are waiting for a review from other OpenCraft developer(s), or where the next step is to be done by the reviewer.

The reviewer will look over the code and test it, leaving feedback on the pull request. If the code needs work, the reviewer should move it back to "In Progress." Once all of the reviewers' concerns have been addressed, the reviewer should give a ":+1:" (thumbs up) comment to indicate that they've approved the PR. The task can then proceed back to "In Progress" for the assignee to either merge it or ask upstream (e.g. edX) to do a second review.

Note: If the corrections to be made are fairly trivial, the reviewer should give a conditional +1, e.g. say "+1 if you fix the minor issues a and b." That way, the author won't be blocked waiting for a trivial follow-up review.

If the only pull requests on the issue will be reviewed by a core contributor from our team (see next section), this internal review can be skipped.

Core contributor review

A specific type of internal review is "core contributor review" - this is when a pull request against an upstream repository like edx-platform is reviewed by one of the core contributors on our team. This uses the same column as "Internal review" on the sprint board, but is also tracked using a separate ticket just for the review (see "How to request Core Contributor review"). (If the Core Committer is not part of OpenCraft, just put it under "External Review" instead.)

The assignee can choose to have upstream PRs reviewed by both the assigned Reviewer 1 and the assigned Core Contributor, or just by the Core Contributor.

If the upstream pull request is introducing a new feature or user-visible change, there is usually a required review from edX product (who give a non-technical approval for new features) as well, after the review by the core contributor. While waiting for the edX product review, the ticket should be in the "External Review/Blocked" status.

External Review/Blocked

This column is for when the next steps need to be done by someone outside of OpenCraft (e.g. we're waiting for a review from edX), and they have been pinged. This column can also indicate that the ticket is blocked by another one - in that case, the blocking ticket should be "linked" as a blocker in Jira.

The assignee should carefully follow the progress of the external review or blocker. If no progress is made for some time, the assignee should send a polite reminder to the external person/people we are waiting for.

If the person who does the merge is within the OpenCraft team, they can move the task forward. Otherwise, you should regularly check whether the PR has been merged, and move it yourself.

If you (the assignee) do not expect any progress on the ticket in the upcoming sprint, you should move it to "Long External Review/Blocked", so that it won't clutter the sprint board and your commitments for the upcoming sprint are more certain.

Merged

All PRs from the task (including upstream PRs) have been merged. The assignee is now responsible for deploying the code and notifying the client that the work has been done ("delivering").

When you move a task to Merged, JIRA will open a popup with many fields. You don't need to enter more information, but you can use the Resolution field to explain how or why the task was closed.

Deployed & Delivered

Once all PRs have been merged, and the code has been deployed, and the client has been notified that the work is done (including updating the client's Jira or Trello tickets if applicable), the assignee should move the ticket to Deployed & Delivered.

This column indicates that the task is ready for the sprint manager to check and close it.

Important: All tasks should get to "Deployed & Delivered" (or "Done") or be in "External Review" before the end of the sprint. Any other status is considered a spillover, which is important to avoid.

Done

This column indicates that the ticket has been reviewed by the cell's Sprint Manager, who will double-check the following criteria before moving the ticket to this column:

  • All code which could be upstreamed has been upstreamed, or was developed as a plugin using a stable public API.
  • All pull requests are merged.
  • The code has been deployed.
  • The client has been notified, and the corresponding ticket on their Trello/Jira board (if any) has been updated.

Note

The client does not need to have signed off on the work before we consider it done. If the client finds a bug, simply move the ticket back to "In Progress" if it's still in the same sprint, or create a bugfix ticket if the sprint is over. If the client requests additional changes/features, create a new ticket for the next sprint.

Recurring

Recurring tasks are part of every sprint.

These are tasks that happen on a regular basis. When you see a task in the "Recurring" column, you can expect work on it, and a time budget allocated for it, in every sprint. Typical examples include mentoring new joiners, team meetings, etc.

Just like regular tasks, recurring tasks start in the "Backlog" and JIRA puts them in "This sprint" after they get pulled into a sprint for the first time. The assignee of a recurring task should move the task from "This sprint" straight to "Recurring" at the beginning of the first sprint that includes the task. If the work belonging to a recurring task ends (e.g. because we finished the project that it belongs to), the assignee should move it to "Merged" before the end of the current sprint.

Long External Review/Blocked

Tasks in this category are not part of the current sprint.

This category is for tasks that are waiting for an external review or some other external requirement, and are not expected to be unblocked before the end of the current sprint. If any tasks in this category are assigned to you, you should review them once a week to see if you need to ping the external party to remind them to review/unblock it, or if you are ready to pull it back into a sprint.

JIRA summary

These are the main views you'll use:

  • the backlog view (where you see 1 task per row), while planning sprints. Find the link for your cell
  • the weekly sprint board (where you see columns), while working on the current sprint. Find the link for your cell
  • tempo views, for time logging. Refer to these instructions
  • estimation session (you'll receive a link)
  • and of course the individual task view. Use the Edit button to modify any task field.

Colors have meanings in some contexts but are arbitrary in others.

  • In the backlog view (where you see 1 task per row), a task's left margin can be yellow, green or other colors. Refer to these explanations.
  • In the backlog view you'll also see epic names at the right. Each epic is shown in a different color, which we assign arbitrarily.

Task icons have meanings: there's an icon per task type, and an icon per task priority.

JIRA is often slow. Refreshing the backlog page through the browser's refresh button can take a long time. You can do a faster refresh of the current view by just clicking the backlog button again (in the left bar).

We don't use all JIRA fields and you can ignore the unneeded ones. See the list of basic fields to use in tasks.

You can also ignore some inconsistencies, like:

  • the External review / Blocker status is shown as Upstream PR in the workflow
  • a subtask shows up with the type Technical task instead of the usual Story
  • flags are called impediments in some places

UX First Approach

The UX-first approach focuses on the following points:

  • UX review & testing of all changes, as part of code review, before any merge.
  • Discovery/design documents for all changes that are user-facing or impact the user experience.
  • Small development iterations (1 production release per sprint, from the very first sprint).
  • Systematic user testing & feedback analysis (with internal & external users) of all new features.

Process for UX design & mockup

Internal epics requiring UX design are done iteratively. First, a preliminary discovery is done by the epic owner, to determine the scope and requirements of the epic, and prepare:

  • An initial brief for UI/UX Designers. The goal is to arrive with requirements that they can work with, i.e. provide needs, goals and scope. It doesn’t mean that we should have a finalized solution to meet those needs, or that we shouldn’t question some of these requirements -- part of the next step is for UI/UX Designers to help define it. But we should clearly state what we know we want, and what we need help to figure out.

  • Technical basis for the work (technologies to use)

  • Monthly commitment on the project (in developer hours/month) and a deadline for an MVP v1 release. Since we work iteratively, we don't have a fixed scope for the MVP at this point, but we decide when we'll launch it (and switch to it ourselves, too). We should always have a soft launch, because we need users early on for a proper feedback cycle. The deadline is more about when we start a larger marketing effort.

UI/UX Discovery

A UX discovery is done by UI/UX Designers to evaluate the brief, help refine the scope/requirements, define a suggested approach and plan for the UX work needed to get to the finalized mockups, and estimate the monthly UX time commitment needed to complete the planned UX work. As much as possible, the UX work should be planned in stages, to allow development to happen iteratively:

  • Each UX task should target preparing work sized to a single sprint of the developers (basic proof of concept at first, designing and implementing a small portion of a feature each time, etc.);

  • Plan to schedule further UX tasks iteratively - while developers work on the outcome of the first UX task, work on the second UX task to prepare the developers' next sprint, and so on. Don't plan too far in advance, to allow us to adapt progressively. We become invested in work that we have already done - becoming too attached to designs or features early on is a real risk, and can divert us from the user feedback.

  • Developers (& Xavier Antoviaque) are reviewers on UX tasks, and UI/UX Designers are 2nd reviewers on developer tasks - this way, developers can provide feedback on what can be achieved during their next sprint. UI/UX Designers write the stories for the developers as part of sprint planning, and the epic owner refines/splits them as needed to take technical constraints into account. Although UI/UX Designers are 2nd reviewers on the ticket, the changes need their approval before being merged.

  • UX time requirement estimates are determined (as hours/month) during the initial discovery of each epic, and the UI/UX Designer's availability is checked and discussed as part of epic planning management. When sprint planning managers prepare their cell availability, they also check with UI/UX Designers how many UX hours the cell will need to progress on its epics.

  • UX designers (& Xavier Antoviaque) are the product owners of the work being delivered, having the same role for internal projects as a client.

  • For each feature or improvement, user testing is performed the following sprint, and user interviews are performed continuously, with 1-2 users each sprint.

Process for UX reviews of development tasks

UI/UX Designers review & test changes on the last Thu/Fri of each sprint:

  • Fixes that are part of the scope of the ticket being reviewed are done immediately, and UI/UX Designers review these small fixes once they are deployed.
  • Fixes/tweaks that extend the scope of the ticket being reviewed are compiled into one or several tickets, created by UI/UX Designers and scheduled for the following sprint.
  • Developers work on the new tweak/feedback tickets during the next sprint.
  • UI/UX Designers review small tweaks/fixes once they are deployed.
  • UI/UX Designers review larger tweaks at the end of the sprint (along with any new changes implemented during the sprint).

UI/UX Reviews

When working on tickets that require UI/UX reviews, there are a few extra points to consider:

  • At the start of the sprint, make sure to ping the UI/UX reviewer about the tickets coming up for review this sprint. This helps the UI/UX reviewers plan for the review early on.

  • The UI/UX reviewer should go through all the tickets to be reviewed at the start of the sprint and plan their time accordingly. The reviewer should flag any concerns, such as lack of availability or planned time off.

  • Remember that UI/UX reviewers are not developers, so when writing task descriptions, avoid language that a non-technical person might not understand.

  • It's a good idea to regularly leave updates on the ticket, so that the reviewers have an idea of when the ticket will be ready for review. Since UI/UX reviewers work with multiple clients, this helps them prioritize their work.

  • Make sure to provide a way for the UI/UX reviewers to review the changes; this can be deploying the changes to the stage environment or providing screenshots - it will vary with the situation and needs.

  • Once the changes are submitted for review, the UI/UX reviewers should make sure to leave a comment about when they can review them.

  • A ticket is considered a spillover if either the code reviews or the UI/UX reviews are missing, or if any comments are not addressed. While doing a UI/UX review, be aware of the scope of the ticket: anything that crosses the scope should be done in a follow-up ticket, not the current one.

Splitting Tickets

Occasionally, you may run across a ticket which appears well-defined, but which has made some incorrect assumptions that can lead to a significant increase in scope. One example of this might be a task that requires implementation of a feature in a codebase we haven't worked with before.

As you finish the implementation, you start on tests, as is good practice. In the process, you discover that this codebase does not have any kind of testing framework, and that it is set up poorly enough that adding one would significantly alter the scope, requiring you to change how its devops code is handled.

Following our commitment to making sure code is tested, we do not ignore the need for tests. Instead, create a new ticket that covers the work needed to allow testing, and adds tests for your new feature as a start. Once you've created this ticket, ping your epic owner to let them know about the problem, and schedule it into your next sprint unless they advise you otherwise.


  1. Private for OpenCraft Employees 


Last update: 2023-10-16