There are several types of tasks, and each has its own general type of workflow. Each task is also given a "story point", which is an approximate estimation of the amount of effort and time required to complete the task. However, the time required varies depending on the amount of context the assignee has.
All tasks share at least the following workflow:
- Create the task.
- Refine the task. This means determining whether it's appropriate for the sprint, who should take it, who should review it, and how many story points it probably requires.
- Do the task. How each task is generally "done" depends on its type; see more in the sections below.
- Review the task (internal/upstream). How one reviews a task will tend to mirror how one does it. For example, discovery tasks generally don't require writing test code, so a review would also generally not require testing anything step by step, as would be the case for a general task.
- Close the task.
There are eight columns on the sprint board which a ticket will move through as the task is completed. These are detailed under Task statuses.
| Points | Description | Example |
|--------|-------------|---------|
| 1 | Trivial task: We know the codebase, there's little or no code to write, little or no need for code review, no back-and-forth interaction with outside teams, and no deployments. | Account Verification emails not sending for HUIT instance (OC-2780) |
| 2 | Small task: The change required is simple, or the cause of the bug is evident. There may be a deployment, or we may need to interact with an external team (e.g. edX code review), but those are expected to go quickly and smoothly. There will be perhaps one new unit test. | UI issue in Course Info Overlay (OC-2792) |
| 3 | Moderate task: Some uncertainty, so investigation is required; will involve some new code, maybe a couple of new tests, and an external review. | Send email to unregistered students who request password reset (OC-2857) |
| 5 | The change is significant and would probably take more than a day; or the change seems small, but most people on our team are not familiar with this codebase, and some learning or interaction with an external team will definitely be required. The code will require quite a few new tests. | Shared RabbitMQ support for OpenCraft IM (OC-1719) |
| 8 | The task would take anyone a significant amount of time to implement, and will then require a careful review, likely involving back-and-forth and more coding. There is significant risk and/or novelty. We will need to interact with external teams, and the upstream review is likely to come back with changes. The code will need several types of tests, including Selenium integration tests. The best approach is not yet known or may be controversial. | Display Course Information on Landing page (OC-2617) |
| 13 | The task is very large and novel; it should be split up across sprints as necessary, may require discovery at multiple steps of the implementation, will require a significant amount of test code and careful manual testing (both internally and by other teams), and is generally very experimental, even for non-technical stakeholders. | Build Programs Landing Page (OC-2963) |
Discovery tasks are timeboxed (with specific exceptions), so the general time approximations don't apply.
| Points | Description | Example |
|--------|-------------|---------|
| 1 | Trivial discovery: We've done a very similar estimation/discovery before (possibly for the same client), the discovery involves material that would be familiar to any one of us, and a discovery document most likely isn't required. | Estimate PHP upgrade (OC-2881) |
| 2 | Small discovery: The discovery is relatively simple, or we already have a pretty good idea of what the estimates will be. A discovery document most likely isn't required, and most communication about it can happen right on the ticket. | Ooyala Closed Captions (OC-1899) |
| 3 | Moderate discovery: Some uncertainty, so the discovery will require a careful review. A discovery document is likely required. | Ginkgo Upgrade (OC-2762) |
| 5 | The discovery will require a deep dive and involves several different moving pieces to consider. There's plenty of uncertainty, so a lot of investigation and a careful review will be required. A discovery document (possibly several) is required to effectively communicate the results. | Cross-cloud Analytics for Cloudera (OC-2109) |
| 8 | A very open and novel discovery that has all the traits of a 5-point discovery, but will also require contact with other teams, a detailed discovery document, diagrams, and possibly even slides/videos to effectively communicate with external stakeholders. | OpenStack Production Support (OC-1046) |
A prototype task is just like a general task, except the code does not have to include tests, meet all a11y/i18n requirements, or be merged. The code should still be able to demonstrate the new functionality, though. Usually, after the prototype is complete, the task assignee will create one or more additional stories to complete the work.
| Points | Description | Example |
|--------|-------------|---------|
| 1 | Trivial prototype: We're very familiar with what needs to be changed or added to get the prototype ready, no tests should be necessary, and it should take very little effort. | "Prototype: Client requests site's theme to be a light blue" |
| 2 | Small prototype: The prototype is pretty simple or straightforward, and not considerably novel. The additions required will not be significant, and writing tests will most likely not be necessary. | Auto-advance to next unit after a video (OC-2594) |
| 3 | Moderate prototype: Some uncertainty, so investigation is required; may possibly require tests. | Course Blocks API student_view_data for step builder (OC-2809) |
| 5 | A fairly large prototype for which the solution or preferred path is not obvious, and some fresh decision-making will have to be done. The prototype requires significant domain knowledge. | Open edX on OpenStack Continuous Integration (CI) (OC-2167) |
| 8 | The prototype is large and novel, and will require a detailed discovery that will need to be carefully communicated to the reviewer. A lot of domain knowledge will be required, and some may even need to be invented. | DiscussionXBlock Prototype (OC-1630) |
We mark introductory-level tasks as Newcomer Friendly to help newcomers identify tasks that would be suitable for them to pick up during their trial period at OpenCraft.
To mark a task as newcomer-friendly so that its tag is visible in the sprint backlog, the tag must be set in the hidden "Fix Version" field (not the "Affects Version" field), which requires a special process in JIRA:
- Open your cell's sprint board, and go to the Backlog view.
- Open the "Versions" panel on the left side of the backlog.
- Drag the issue you want to tag as Newcomer Friendly onto the "Newcomer Friendly" version for your cell.
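If you prefer scripting over the drag-and-drop flow, the same tagging can in principle be done through JIRA's standard REST API by adding the version to the issue's Fix Version field. This is only a sketch: the base URL, token, and issue key are placeholders, and your cell's exact version name may differ.

```python
import json
from urllib import request

def build_fix_version_payload(version_name):
    """Build the JSON body that adds a Fix Version to an issue,
    using JIRA's standard "update" operation syntax."""
    return {"update": {"fixVersions": [{"add": {"name": version_name}}]}}

def tag_newcomer_friendly(base_url, issue_key, token,
                          version_name="Newcomer Friendly"):
    """PUT the payload to the standard issue-edit endpoint.

    base_url, issue_key, and token are placeholders for your instance.
    """
    req = request.Request(
        f"{base_url}/rest/api/2/issue/{issue_key}",
        data=json.dumps(build_fix_version_payload(version_name)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="PUT",
    )
    return request.urlopen(req)  # JIRA replies 204 No Content on success

# Show the payload without making a network call:
print(build_fix_version_payload("Newcomer Friendly"))
```

The drag-and-drop flow above remains the documented way to do this; the script is equivalent only if your JIRA instance exposes the standard v2 API and your account has edit rights on the issue.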
These tasks should also contain at least the following information:
- Affected code
- Provide a link to the repository and target branches for each PR you expect this ticket to produce.
- Indicate what devstack setup or other preparatory steps are required to complete the task.
- Risk factors
- List any uncertainties, ambiguities, or risk factors that you can identify for this task.
- Acceptance criteria
- List these in the description, or as checklist items on the task if you prefer.
- Related tasks
- Link to any discovery or preliminary tasks, or to the epic for more context on this work.
- Ensure that the Original Estimate and Remaining Estimate fields are filled in.
If there is a timebox, ensure that this is clearly stated in the ticket description.
- (optional) Newcomers: please log time in excess of the task estimate on your onboarding task.
## Creating a task
The Create button in JIRA opens a form where you input some basic information. Some fields are self-explanatory; others take a while to get used to. In particular:
- epic: this is always expected, because tasks should belong to an epic so they aren't forgotten. Find a related one, and ping someone (e.g. the epic owner) if you need to verify it. The epic needs to be open (in development).
- account: this is for accounting and billing. It takes some time to get a feeling for which account is right for each type of work (e.g. support vs. maintenance, bugs vs. upgrades). Try to use the same account as a similar task, and ping someone (e.g. the epic owner) if you need a double-check. The most important decision is whether it's an internal account or a client account. There are also cell budgets that limit the amount of internal work.
- summary: a short title.
- description: the most important field. We have a template that you can use. As the reporter, you should include all the information in the template.
- story type (story, epic, bug): use story by default, or bug if the task looks like a bug fix. We treat both the same way, but they get different icons. Use epic if you're creating new projects (see epic management).
- story points: leave this blank and we'll include the task in an estimation session. If the task is trivial, you can estimate it yourself.
- original/remaining estimate: these can be decided later by the assignee. The original estimate is set once and doesn't change, whereas the remaining estimate is dynamic: both start at the same value, but the remaining estimate decreases as time is logged, and you may also adjust it manually. If the estimate for this task has been shared with the client (e.g. the task came from a discovery), use that estimate here. The assignee might still change this number, but it gives them an idea of where to start.
- reviewer: this can be decided later.
- sprint: unless you're sure about when we will do the task, leave this blank and ping the epic owner to schedule it.
- checklist: don't worry about these during task creation. We may use some of these fields later as part of epic and sprint management, but it's fine to leave them blank when you create tasks.
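As a sketch of how these fields fit together, here is what a minimal issue-creation payload for JIRA's standard REST API (POST /rest/api/2/issue) might look like. Note the assumptions: the epic link and account live in custom fields whose IDs vary per JIRA instance, so the `customfield_*` IDs, project key usage, and issue keys below are hypothetical examples, not values from our tracker.

```python
import json

def build_issue_payload(project_key, summary, description, issue_type="Story",
                        epic_field=None, epic_key=None,
                        account_field=None, account_id=None):
    """Assemble the JSON body for JIRA's create-issue endpoint.

    epic_field/account_field are the instance-specific custom field IDs
    (e.g. "customfield_10008" -- hypothetical); pass yours explicitly.
    """
    fields = {
        "project": {"key": project_key},
        "summary": summary,
        "description": description,
        "issuetype": {"name": issue_type},
    }
    if epic_field and epic_key:
        fields[epic_field] = epic_key        # Epic Link custom field
    if account_field and account_id:
        fields[account_field] = account_id   # account custom field
    return {"fields": fields}

# Hypothetical example: a bug linked to epic OC-1000.
payload = build_issue_payload(
    "OC", "Fix login redirect", "Steps to reproduce: ...",
    issue_type="Bug", epic_field="customfield_10008", epic_key="OC-1000")
print(json.dumps(payload, indent=2))
```

Story points, sprint, and reviewer are deliberately absent, matching the advice above to leave them for estimation and sprint planning.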
Please see below for the ticket statuses we use in our JIRA tracker, and what they each tend to represent.
Most tasks start here.
The tasks at the top of the backlog are the ones with the highest priority. Epic owners are usually the ones who create and prioritize the tickets according to the client's needs, except during discoveries for large epics, when the person doing the discovery creates the tasks. In general, if you have an idea for a new task, discuss it either:
- With the epic owner of a related epic (who might have time budget available to work on it),
- With your cell (which could allocate some non-billed time, depending on its priorities),
- On the forum if this is meant to be a larger initiative.
This is the list of tasks assigned to individuals and expected to be finished by the end of the sprint.
Once you start working on a task, you should move it to the next column.
Tasks that are currently being worked on. In this column, the next steps are the assignee's. Use this column if the task needs any further work from the assignee, including when the code reviews are done and the code is ready to be merged/deployed.
While you work on the task, log your time progressively ("Log work"). You can also push to your remote branch regularly.
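For illustration, JIRA's standard REST API exposes a worklog endpoint behind the "Log work" action. This is only a sketch of the request body; we normally log time through Tempo (see the views section below), which may handle worklogs differently, so treat the endpoint as an assumption about plain JIRA rather than our documented workflow.

```python
def build_worklog_payload(time_spent, comment=""):
    """JSON body for JIRA's standard worklog endpoint,
    POST /rest/api/2/issue/{issue_key}/worklog.

    JIRA accepts human-friendly durations such as "30m", "2h", or "1d".
    """
    return {"timeSpent": time_spent, "comment": comment}

# Hypothetical worklog entry:
print(build_worklog_payload("45m", "Investigated failing email task"))
```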
Once you are finished, create a Pull Request and move the task to the next column. Add the Pull Request to the task by clicking More > Link > Web Link and pasting the URL of the Pull Request. If you have many PRs, add them to the description or use a checklist instead, so that you can indicate the status of each PR by marking it with a check or strikethrough as it's merged (which isn't possible with a simple link); see OC-1064 as an example. Also, be sure to notify the person who will review your work; mentioning their username in a comment on the task is usually enough.
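The More > Link > Web Link step can likewise be scripted: JIRA's standard REST API has a remote-link endpoint that attaches a URL with a title to an issue. A minimal sketch follows; the PR URL and title are hypothetical placeholders.

```python
def build_remote_link_payload(pr_url, title):
    """JSON body for JIRA's remote-link endpoint,
    POST /rest/api/2/issue/{issue_key}/remotelink --
    the API equivalent of More > Link > Web Link."""
    return {"object": {"url": pr_url, "title": title}}

# Hypothetical PR link:
print(build_remote_link_payload(
    "https://github.com/example/repo/pull/1234", "example repo PR #1234"))
```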
This column is for tasks that are waiting for a review from other OpenCraft developer(s), or where the next step is to be done by the reviewer.
The reviewer will look over the code and test it, leaving feedback on the pull request. If the code needs work, the reviewer should move it back to "In Progress." Once all of the reviewers' concerns have been addressed, the reviewer should give a ":+1:" (thumbs up) comment to indicate that they've approved the PR. The task can then proceed back to "In Progress" for the assignee to either merge it or ask upstream (e.g. edX) to do a second review.
Note: If the corrections to be made are fairly trivial, the reviewer should give a conditional +1, e.g. say "+1 if you fix the minor issues a and b." That way, the author won't be blocked waiting for a trivial follow-up review.
This column is for when the next steps need to be done by someone outside of OpenCraft (e.g. we're waiting for a review from edX), and they have been pinged. This column can also indicate that the ticket is blocked by another one - in that case, the blocking ticket should be "linked" as a blocker in Jira.
The assignee should carefully follow the progress of the external review or blocker. If no progress is made for some time, the assignee should send a polite reminder to the external person/people we are waiting for.
If the person doing the merge is within the OpenCraft team, they can move the task forward. Otherwise, you should regularly check whether the PR has been merged, and move it yourself.
If you (the assignee) do not expect any progress on the ticket in the upcoming sprint, you should move the ticket to the "Long External Review/Blocked" sprint, so that it won't be cluttering up the sprint board and so that your commitments for the upcoming sprint are more certain.
All PRs from the task (including upstream PRs) have been merged. The assignee is now responsible for deploying the code and notifying the client that the work has been done ("delivering").
When you move a task to Merged, JIRA will open a popup with many fields. You don't need to enter more information, but you can use the Resolution field to explain how or why the task was closed.
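Moving a ticket between columns can also be done programmatically: in JIRA's standard REST API, a column move corresponds to a workflow transition, and the Resolution field can be set in the same request. This is a sketch only; transition IDs are specific to each JIRA instance (you can list them with GET /rest/api/2/issue/{key}/transitions), and the ID and resolution name below are made-up examples.

```python
import json

def build_transition_payload(transition_id, resolution=None):
    """JSON body for POST /rest/api/2/issue/{issue_key}/transitions.

    transition_id is instance-specific; resolution, if given, is set
    on the transition screen as part of the same request.
    """
    payload = {"transition": {"id": str(transition_id)}}
    if resolution:
        payload["fields"] = {"resolution": {"name": resolution}}
    return payload

# Hypothetical transition ID and resolution:
print(json.dumps(build_transition_payload(31, "Done"), indent=2))
```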
## Deployed & Delivered
Once all PRs have been merged, the code has been deployed, and the client has been notified that the work is done (including updating the client's Jira or Trello tickets, if applicable), the assignee should move the ticket to Deployed & Delivered.
This column indicates that the task is ready for the sprint manager to check and close it.
Important: All tasks should get to "Deployed & Delivered" (or "Done") or be in "External Review" before the end of the sprint. Any other status is considered a spillover, which is important to avoid.
## Asking for feedback
When the task is in this column, it is a good time to ask for feedback from the reviewers or others that you have worked with. While this is not mandatory, it's always good to have some feedback and know what went well and what could be improved, especially on more complex tasks, or tasks that you've had some issues with.
Feedback can be either personal or about a process that can be improved in OpenCraft. So if you feel like you need feedback about anything related to the task (including metawork), post a comment like this:
> Hey <reviewer>! Can you post some feedback about my work on this ticket?
> 1. What went well?
> 2. What can be improved?
> 3. <Any other questions that you might have>
This column indicates that the ticket has been reviewed by the cell's Sprint Manager, who will double-check the following criteria before moving the ticket to this column:
The client does not need to have signed off on the work before we consider it done. If the client finds a bug, simply move the ticket back to "In Progress" if it's still in the same sprint, or create a bugfix ticket if the sprint is over. If the client requests additional changes/features, create a new ticket for the next sprint.
Recurring tasks are part of every sprint.
These are tasks that happen on a regular basis. When you see a task in the "Recurring" column, then you expect to see work on this task and a time budget allocated for it in every sprint. These kinds of tasks are often helpful in cases like mentoring new joiners, team meetings, etc.
Just like regular tasks, recurring tasks start in the "Backlog" and JIRA puts them in "This week" after they get pulled into a sprint for the first time. The assignee of a recurring task should move the task from "This week" straight to "Recurring" at the beginning of the first sprint that includes the task. If the work belonging to a recurring task ends (e.g. because we finished the project that it belongs to), the assignee should move it to "Merged" before the end of the current sprint.
## Long External Review/Blocked
Tasks in this category are not part of the current sprint.
This category is for tasks that are waiting for an external review or some other external requirement, and are not expected to be unblocked before the end of the current sprint. If any tasks in this category are assigned to you, you should review them once a week to see if you need to ping the external party to remind them to review/unblock it, or if you are ready to pull it back into a sprint.
These are the main views you'll use:
- the backlog view (where you see 1 task per row), while planning sprints. Find the link for your cell
- the weekly sprint board (where you see columns), while working on the current sprint. Find the link for your cell
- tempo views, for time logging. Refer to these instructions
- estimation session (you'll receive a link)
- and of course the individual task view. Use the Edit button to modify any task field. Note that some advanced operations, like classifying a task as newcomer-friendly, cannot be done from the task itself and are done from the backlog or sprint views
Colors have meanings in some contexts but are arbitrary in others:
- In the backlog view (where you see 1 task per row), a task's left margin can be yellow, green, or other colors. Refer to these explanations.
- In the backlog view you'll also see epic names at the right. Each epic is shown in a different color, which we assign arbitrarily.
Task icons have meanings: there's an icon per task type, and an icon per task priority.
JIRA is often slow. Refreshing the backlog page through the browser's refresh button can take a long time. You can do a faster refresh of the current view by just clicking the backlog button again (in the left bar).
We don't use all JIRA fields and you can ignore the unneeded ones. See the list of basic fields to use in tasks.
You can also ignore some inconsistencies, like:
- the External review / Blocker status is shown as Upstream PR in the workflow
- a subtask shows up with the type Technical task instead of the usual Story
- flags are called impediments in some places
- some version fields are used to mean newcomer-friendliness