I need to save JSON data to an Oracle database. The JSON looks like this (see below), but it doesn't stay in the same format: I might add some additional nodes or modify existing ones. So is it possible to create or modify Oracle tables dynamically to add more columns? I was going to do that with Java: create a Java class matching the JSON, convert the JSON to a Java object, and persist it to the table. But how can I modify the Java class dynamically? Or would it be a better idea to do this with PL/SQL? The JSON comes from a mobile device to a REST web service.
{"menu": {
"id": "file",
"value": "File",
"popup": {
"menuitem": [
{"value": "New", "onclick": "CreateNewDoc()"},
{"value": "Open", "onclick": "OpenDoc()"},
{"value": "Close", "onclick": "CloseDoc()"}
]
}
}}
I would suggest that you avoid creating new columns, and instead create a new table that will contain one entry for each of what would have been the new columns. I'm assuming here that the new columns would be menu items. So you would have a "menu" table with these columns:
id, value
and you would have a "menuitem" table which would contain one entry for each of your menu items:
id, value, onclick
So instead of adding columns dynamically, you would be adding records.
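To make this concrete, here is a minimal JDBC sketch of that two-table design. It is only an illustration: the connection URL and credentials are placeholders, the column sizes are guesses, and the identity column assumes Oracle 12c or later.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class MenuSchemaSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "user", "password")) {
            try (Statement st = con.createStatement()) {
                st.execute("CREATE TABLE menu (id VARCHAR2(50) PRIMARY KEY, value VARCHAR2(100))");
                st.execute("CREATE TABLE menuitem ("
                        + "id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, "
                        + "menu_id VARCHAR2(50) REFERENCES menu(id), "
                        + "value VARCHAR2(100), onclick VARCHAR2(200))");
            }
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO menu (id, value) VALUES (?, ?)")) {
                ps.setString(1, "file");
                ps.setString(2, "File");
                ps.executeUpdate();
            }
            // A new menu item becomes a new row, not a new column.
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO menuitem (menu_id, value, onclick) VALUES (?, ?, ?)")) {
                ps.setString(1, "file");
                ps.setString(2, "New");
                ps.setString(3, "CreateNewDoc()");
                ps.executeUpdate();
            }
        }
    }
}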
I suggested in the comments changing your approach to a NoSQL database like MongoDB. However, if you still feel you need to use a relational database, maybe the EAV model can point you in the right direction.
In summary, you would have a "helper" table that stores which columns an Entity has and their types.
You cannot modify a Java class at runtime, but you can model the entity as a Map and implement the logic to handle the desired columns.
Magento, a PHP product, uses EAV in its database.
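As a minimal sketch of EAV from the Java side (the entity_value table and its columns are hypothetical, not an existing schema), each attribute lives in its own row, and the application reads an entity back into a Map rather than a fixed class:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class EavSketch {
    // Hypothetical schema:
    //   entity_value(entity_id NUMBER, attr_name VARCHAR2(100), attr_value VARCHAR2(4000))
    public static Map<String, String> loadEntity(Connection con, long entityId) throws SQLException {
        Map<String, String> attrs = new HashMap<>();
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT attr_name, attr_value FROM entity_value WHERE entity_id = ?")) {
            ps.setLong(1, entityId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // A "column" added later is simply another attr_name row.
                    attrs.put(rs.getString("attr_name"), rs.getString("attr_value"));
                }
            }
        }
        return attrs;
    }
}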
MongoDB may be your best choice, or you could have a large TEXT field and extract only the columns you are likely to search on.
However, you can CREATE TABLE for additional normalised data and ALTER TABLE to add a column. The latter can be particularly expensive.
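A rough sketch of that hybrid layout (all names here are illustrative): the full JSON stays in a CLOB, and only the fields you expect to search on get real columns.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class HybridStorageSketch {
    static void createSchema(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.execute("CREATE TABLE menu_doc ("
                    + "id VARCHAR2(50) PRIMARY KEY, "
                    + "payload CLOB, "            // full JSON kept verbatim
                    + "value VARCHAR2(100))");    // extracted, searchable field
            // Promoting another field to a searchable column later is an
            // ALTER TABLE, which can be expensive on large tables.
            st.execute("ALTER TABLE menu_doc ADD (onclick VARCHAR2(200))");
        }
    }
}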
Use https://github.com/zolekode/json-to-tables/.
Here you go:
from core.extent_table import ExtentTable
from core.table_maker import TableMaker

# The library consumes a plain dict, so no json.dumps/json.loads round-trip is needed.
menu = {
    "id": "file",
    "value": "File",
    "popup": {
        "menuitem": [
            {"value": "New", "onclick": "CreateNewDoc()"},
            {"value": "Open", "onclick": "OpenDoc()"},
            {"value": "Close", "onclick": "CloseDoc()"}
        ]
    }
}

extent_table = ExtentTable()
table_maker = TableMaker(extent_table)
table_maker.convert_json_object_to_table(menu, "menu")
table_maker.show_tables(8)
table_maker.save_tables("menu", export_as="sql", sql_connection="your_connection")
Output:
SHOWING TABLES :D
menu
ID id value popup
0 0 file File 0
1 1 None None None
____________________________________________________
popup
ID
0 0
1 1
____________________________________________________
popup_?_menuitem
ID PARENT_ID is_scalar scalar
0 0 0 False None
1 1 0 False None
2 2 0 False None
____________________________________________________
popup_?_menuitem_$_onclick
ID value onclick PARENT_ID
0 0 New CreateNewDoc() 0
1 1 Open OpenDoc() 1
2 2 Close CloseDoc() 2
3 3 None None None
____________________________________________________
This can be done in a SQL Server database:
This code takes a JSON input string and automatically generates SQL Server CREATE TABLE statements, making it easier to convert serialized data into a database schema. It is not perfect, but it should provide a decent starting point when starting to work with new JSON files.
SET NOCOUNT ON;
DECLARE
    @JsonData nvarchar(max) = '
    {
        "Id" : 1,
        "IsActive":true,
        "Ratio": 1.25,
        "ActivityArray":[true,false,true],
        "People" : ["Jim","Joan","John","Jeff"],
        "Places" : [{"State":"Connecticut", "Capitol":"Hartford", "IsExpensive":true},{"State":"Ohio","Capitol":"Columbus","MajorCities":["Cleveland","Cincinnati"]}],
        "Thing" : { "Type":"Foo", "Value" : "Bar" },
        "Created_At":"2018-04-18T21:25:48Z"
    }',
    @RootTableName nvarchar(4000) = N'AppInstance',
    @Schema nvarchar(128) = N'dbo',
    @DefaultStringPadding smallint = 20;
DROP TABLE IF EXISTS ##parsedJson;
WITH jsonRoot AS (
SELECT
0 as parentLevel,
CONVERT(nvarchar(4000),NULL) COLLATE Latin1_General_BIN2 as parentTableName,
0 AS [level],
[type] ,
@RootTableName COLLATE Latin1_General_BIN2 AS TableName,
[key] COLLATE Latin1_General_BIN2 as ColumnName,
[value],
ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS ColumnSequence
FROM
OPENJSON(@JsonData, '$')
UNION ALL
SELECT
jsonRoot.[level] as parentLevel,
CONVERT(nvarchar(4000),jsonRoot.TableName) COLLATE Latin1_General_BIN2,
jsonRoot.[level]+1,
d.[type],
CASE WHEN jsonRoot.[type] IN (4,5) THEN CONVERT(nvarchar(4000),jsonRoot.ColumnName) ELSE jsonRoot.TableName END COLLATE Latin1_General_BIN2,
CASE WHEN jsonRoot.[type] IN (4) THEN jsonRoot.ColumnName ELSE d.[key] END,
d.[value],
ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS ColumnSequence
FROM
jsonRoot
CROSS APPLY OPENJSON(jsonRoot.[value], '$') d
WHERE
jsonRoot.[type] IN (4,5)
), IdRows AS (
SELECT
-2 as parentLevel,
null as parentTableName,
-1 as [level],
null as [type],
TableName as Tablename,
TableName+'Id' as columnName,
null as [value],
0 as columnsequence
FROM
(SELECT DISTINCT tablename FROM jsonRoot) j
), FKRows AS (
SELECT
DISTINCT -1 as parentLevel,
null as parentTableName,
-1 as [level],
null as [type],
TableName as Tablename,
parentTableName+'Id' as columnName,
null as [value],
0 as columnsequence
FROM
(SELECT DISTINCT tableName,parentTableName FROM jsonRoot) j
WHERE
parentTableName is not null
)
SELECT
*,
CASE [type]
WHEN 1 THEN
CASE WHEN TRY_CONVERT(datetime2, [value], 127) IS NULL THEN 'nvarchar' ELSE 'datetime2' END
WHEN 2 THEN
CASE WHEN TRY_CONVERT(int, [value]) IS NULL THEN 'float' ELSE 'int' END
WHEN 3 THEN
'bit'
END COLLATE Latin1_General_BIN2 AS DataType,
CASE [type]
WHEN 1 THEN
CASE WHEN TRY_CONVERT(datetime2, [value], 127) IS NULL THEN MAX(LEN([value])) OVER (PARTITION BY TableName, ColumnName) + @DefaultStringPadding ELSE NULL END
WHEN 2 THEN
NULL
WHEN 3 THEN
NULL
END AS DataTypePrecision
INTO ##parsedJson
FROM jsonRoot
WHERE
[type] in (1,2,3)
UNION ALL SELECT IdRows.parentLevel, IdRows.parentTableName, IdRows.[level], IdRows.[type], IdRows.TableName, IdRows.ColumnName, IdRows.[value], -10 AS ColumnSequence, 'int IDENTITY(1,1) PRIMARY KEY' as datatype, null as datatypeprecision FROM IdRows
UNION ALL SELECT FKRows.parentLevel, FKRows.parentTableName, FKRows.[level], FKRows.[type], FKRows.TableName, FKRows.ColumnName, FKRows.[value], -9 AS ColumnSequence, 'int' as datatype, null as datatypeprecision FROM FKRows
-- For debugging:
-- SELECT * FROM ##parsedJson ORDER BY ParentLevel, level, tablename, columnsequence
DECLARE @CreateStatements nvarchar(max);
SELECT
@CreateStatements = COALESCE(@CreateStatements + CHAR(13) + CHAR(13), '') +
'CREATE TABLE ' + @Schema + '.' + TableName + CHAR(13) + '(' + CHAR(13) +
STRING_AGG( ColumnName + ' ' + DataType + ISNULL('('+CAST(DataTypePrecision AS nvarchar(20))+')','') + CASE WHEN DataType like '%PRIMARY KEY%' THEN '' ELSE ' NULL' END, ','+CHAR(13)) WITHIN GROUP (ORDER BY ColumnSequence)
+ CHAR(13)+')'
FROM
(SELECT DISTINCT
j.TableName,
j.ColumnName,
MAX(j.ColumnSequence) AS ColumnSequence,
j.DataType,
j.DataTypePrecision,
j.[level]
FROM
##parsedJson j
CROSS APPLY (SELECT TOP 1 ParentTableName + 'Id' AS ColumnName FROM ##parsedJson p WHERE j.TableName = p.TableName ) p
GROUP BY
j.TableName, j.ColumnName,p.ColumnName, j.DataType, j.DataTypePrecision, j.[level]
) j
GROUP BY
TableName
PRINT @CreateStatements;
You can find the solution on https://bertwagner.com/posts/converting-json-to-sql-server-create-table-statements/
Also, JSON can be converted to a POJO class in Java:
package com.cooltrickshome;

import java.io.File;
import java.io.IOException;
import java.net.URL;

import org.jsonschema2pojo.DefaultGenerationConfig;
import org.jsonschema2pojo.GenerationConfig;
import org.jsonschema2pojo.Jackson2Annotator;
import org.jsonschema2pojo.SchemaGenerator;
import org.jsonschema2pojo.SchemaMapper;
import org.jsonschema2pojo.SchemaStore;
import org.jsonschema2pojo.SourceType;
import org.jsonschema2pojo.rules.RuleFactory;

import com.sun.codemodel.JCodeModel;

public class JsonToPojo {

    /**
     * @param args
     */
    public static void main(String[] args) {
        String packageName = "com.cooltrickshome";
        File inputJson = new File("." + File.separator + "input.json");
        File outputPojoDirectory = new File("." + File.separator + "convertedPojo");
        outputPojoDirectory.mkdirs();
        try {
            new JsonToPojo().convert2JSON(inputJson.toURI().toURL(), outputPojoDirectory,
                    packageName, inputJson.getName().replace(".json", ""));
        } catch (IOException e) {
            System.out.println("Encountered issue while converting to pojo: " + e.getMessage());
            e.printStackTrace();
        }
    }

    public void convert2JSON(URL inputJson, File outputPojoDirectory, String packageName,
            String className) throws IOException {
        JCodeModel codeModel = new JCodeModel();
        URL source = inputJson;
        GenerationConfig config = new DefaultGenerationConfig() {
            @Override
            public boolean isGenerateBuilders() { // set config option by overriding method
                return true;
            }

            @Override
            public SourceType getSourceType() {
                return SourceType.JSON;
            }
        };
        SchemaMapper mapper = new SchemaMapper(
                new RuleFactory(config, new Jackson2Annotator(config), new SchemaStore()),
                new SchemaGenerator());
        mapper.generate(codeModel, className, packageName, source);
        codeModel.build(outputPojoDirectory);
    }
}
Related
I am using spark-sql 2.4.1 with Java 8. I have a scenario like the one below:
List data = List(
("20", "score", "school", "2018-03-31", 14 , 12 , 20),
("21", "score", "school", "2018-03-31", 13 , 13 , 21),
("22", "rate", "school", "2018-03-31", 11 , 14, 22),
("21", "rate", "school", "2018-03-31", 13 , 12, 23)
)
Dataset<Row> df = data.toDF("id", "code", "entity", "date", "column1", "column2", "column3");
Dataset<Row> resultDs = df
.withColumn("column_names",
array(Arrays.asList(df.columns()).stream().map(s -> new Column(s)).toArray(Column[]::new))
);
But this is showing the respective row values instead of the column names.
So what is wrong here? How can I get "column_names" in Java?
I am trying to solve the following use case:
Let's say I have 100 columns, column1 through column100, and each column's calculation differs depending on the column name and data. Every time I run my Spark job I am told which columns to calculate, but my code contains the logic for all columns, so I need to ignore the logic for the unspecified columns. Because the dataframe contains all columns and I select only the specified ones, my code throws an exception (column not found) for the non-selected columns; I need to fix this.
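For the first question, a sketch of one possible fix (untested): wrap each name in lit() instead of new Column(s). new Column(s) resolves to the column's row values, whereas lit(s) keeps the name itself as a constant.

import java.util.Arrays;

import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

import static org.apache.spark.sql.functions.array;
import static org.apache.spark.sql.functions.lit;

public class ColumnNamesSketch {
    static Dataset<Row> withColumnNames(Dataset<Row> df) {
        // lit(name) yields the name string as a constant value on every row
        Column[] names = Arrays.stream(df.columns())
                .map(s -> lit(s))
                .toArray(Column[]::new);
        return df.withColumn("column_names", array(names));
    }
}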
I tried to send JSON containing an array of JSON objects to a procedure that adds the data to two tables. I built the procedure below:
create or replace PROCEDURE INSERT_ELE
(
ELEMENT_SET_NAME IN VARCHAR2
, ELEMENT_SET_TYPE VARCHAR2
, EFFECTIVE_START_DATE IN VARCHAR2
, EFFECTIVE_END_DATE IN VARCHAR2
, ELEMENT_TYPE_ID IN VARCHAR2
, OUT_SEQ OUT NUMBER
) AS
BEGIN
INSERT ALL INTO payroll_test.PAY_ELEMENT_SETS(ELEMENT_SET_NAME, ELEMENT_SET_TYPE, EFFECTIVE_START_DATE, EFFECTIVE_END_DATE)
VALUES (ELEMENT_SET_NAME, ELEMENT_SET_TYPE, EFFECTIVE_START_DATE, EFFECTIVE_END_DATE)
INTO payroll_test.PAY_ELEMENT_SET_MEMBERS(ELEMENT_TYPE_ID, ELEMENT_SET_ID)
VALUES (ELEMENT_TYPE_ID, PAY_ELEMENT_SETS_MEMBERS_SEQ.NEXTVAL +1)
select ELEMENT_SET_NAME as ELEMENT_SET_NAME, ELEMENT_SET_TYPE as ELEMENT_SET_TYPE, EFFECTIVE_START_DATE as EFFECTIVE_START_DATE, EFFECTIVE_END_DATE as EFFECTIVE_END_DATE, ELEMENT_TYPE_ID as ELEMENT_TYPE_ID from dual;
COMMIT;
END INSERT_ELE;
And this is my java code:
String query = null;
query = "{call INSERT_ELE(?,?,?,?,?,?)}";
cstmt = connection.prepareCall(query);
cstmt.setString(1, objectGroupFormBean.getElementSetName());
cstmt.setString(2, objectGroupFormBean.getElementSetType());
cstmt.setString(3, objectGroupFormBean.getEffectiveStartDate());
cstmt.setString(4, objectGroupFormBean.getEffectiveEndDate());
cstmt.setString(5, objectGroupFormBean.getElementTypeId());
cstmt.registerOutParameter(6, OracleTypes.NUMBER);
cstmt.executeUpdate();
objectGroupFormBean.setElementSetId(cstmt.getInt(6));
objectGroupFormBean.getElementSetId();
objectGroupFormBeanList.add(objectGroupFormBean);
but this only accepts a payload like this:
{
    "elementSetName": "test",
    "elementSetType": "App",
    "effectiveStartDate": "10-10-1981",
    "effectiveEndDate": "20-08-2020",
    "element":
    {
        "elementId": "181",
        "inclusionStatus": "Include"
    }
}
How can I make it accept element as an array of JSON objects, like this:
{
    "elementSetName": "test",
    "elementSetType": "App",
    "effectiveStartDate": "10-10-1981",
    "effectiveEndDate": "20-08-2020",
    "element": [
        {
            "elementId": "181",
            "inclusionStatus": "Include"
        },
        {
            "elementId": "189",
            "inclusionStatus": "Include"
        }
    ]
}
How about flipping this around to avoid all the parameter passing? Just send the JSON to the procedure and let it do the insertion, e.g.
SQL> with t as
2 (
3 select
4 '{
5 "elementSetName": "test",
6 "elementSetType": "App",
7 "effectiveStartDate": "10-10-1981",
8 "effectiveEndDate": "20-08-2020",
9 "element": [
10 {
11 "elementId": "181",
12 "inclusionStatus": "Include"
13 },
14 {
15 "elementId": "189",
16 "inclusionStatus": "Include"
17 }
18 ]
19 }' j from dual
20 )
21 select
22 json_value(j,'$.elementSetName') name,
23 json_value(j,'$.elementSetType') type,
24 json_value(j,'$.effectiveStartDate') sdate,
25 json_value(j,'$.effectiveEndDate') edate
26 from t;
NAME TYPE SDATE EDATE
--------------- --------------- --------------- ---------------
test App 10-10-1981 20-08-2020
1 row selected.
SQL>
SQL> with t as
2 (
3 select
4 '{
5 "elementSetName": "test",
6 "elementSetType": "App",
7 "effectiveStartDate": "10-10-1981",
8 "effectiveEndDate": "20-08-2020",
9 "element": [
10 {
11 "elementId": "181",
12 "inclusionStatus": "Include"
13 },
14 {
15 "elementId": "189",
16 "inclusionStatus": "Include"
17 }
18 ]
19 }' j from dual
20 )
21 select jt.*
22 from t,
23 json_table(j,'$.element[*]' columns ( elementId number path '$.elementId',
24 inclusionStatus varchar2(20) path '$.inclusionStatus')) jt;
ELEMENTID INCLUSIONSTATUS
---------- --------------------
181 Include
189 Include
2 rows selected.
SQL>
Thus your procedure would end up being
procedure INS(p_json varchar2) is
begin
insert into ...
select ...
end;
where the SELECT is simply based on the ones above
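On the Java side the whole payload then travels as one parameter. A minimal sketch, assuming the INS procedure above and that the raw REST request body is available as a String:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class ElementSetDao {
    public void insertFromJson(Connection connection, String jsonPayload) throws SQLException {
        try (CallableStatement cs = connection.prepareCall("{call INS(?)}")) {
            cs.setString(1, jsonPayload); // the entire JSON document, arrays included
            cs.execute();
        }
    }
}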
I have a database with the following schema, which uses PostgreSQL and the PostGIS plugin:
table_id | id | mag | time | felt | tsunami | geom
I have the following SQL to select some rows and return those columns as JSON:
SELECT ROW_TO_JSON(t) as properties
FROM (
SELECT id, mag, time, felt, tsunami FROM earthquakes
) t
I would like to create a SQL statement that returns the table_id, properties and geom, like:
SELECT table_id, properties, GeometryType(geom)
from earthquakes
How can I return the table_id and geom with properties as a JSON?
Edit:
I've created this SQL:
SELECT table_id,
row_to_json((SELECT d FROM (SELECT id, mag, time, felt, tsunami ) d)) AS properties,
GeometryType(geom)
FROM earthquakes ORDER BY table_id ASC;
But when I make a request with Postman, it returns this:
[
    {
        "table_id": 1,
        "properties": {
            "type": "json",
            "value": "{\"id\" : \"ak16994521\", \"mag\" : 2.3}"
        }
    },
    ...
]
How can I return the values as an object?
My expected result should be:
[
    {
        "table_id": 1,
        "properties": {"id" : "ak16994521", "mag" : 2.3}
    },
    ...
]
Java Method:
public List<Map<String, Object>> readTable(String nameTable) {
try {
String SQL = "SELECT table_id, GeometryType(geom) FROM " + nameTable + " ORDER BY table_id ASC;";
return jdbcTemplate.queryForList(SQL);
} catch( BadSqlGrammarException error) {
log.info("ERROR READING TABLE: " + nameTable);
return null;
}
}
With this code it returns this JSON:
[
    {
        "table_id": 1,
        "geometrytype": "POINT"
    },
    {
        "table_id": 2,
        "geometrytype": "POINT"
    },
    ....
]
My expected result should be:
[
    {
        "table_id": 1,
        "properties": {"id" : "ak16994521", "mag" : 2.3, "time": 1507425650893, "felt": "null", "tsunami": 0 },
        "geometrytype": "POINT"
    },
    ...
]
Why not do the whole conversion to JSON on the database?
SELECT json_agg(x) as json_values FROM (
SELECT
table_id,
row_to_json((select d from (select id, mag, time, felt, tsunami) d)) as properties,
GeometryType(geom)
FROM earthquakes
ORDER BY table_id ASC
) x;
I found this SQL that creates a GeoJSON document, so when I call the method it returns perfectly as a serialized string.
SELECT row_to_json(fc) FROM (
    SELECT 'FeatureCollection' As type,
           array_to_json(array_agg(f)) As features FROM
        (SELECT 'Feature' As type,
                ST_AsGeoJSON(lg.geom)::json As geometry,
                row_to_json((SELECT l FROM (SELECT id, mag, time, felt, tsunami) As l)) As properties
         FROM terremotos As lg) As f) As fc;
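Plugged into the readTable style above, that query might be used like the following sketch; the ::text cast is an assumption so JdbcTemplate hands back a plain String instead of a PGobject:

public String readTableAsGeoJson(String nameTable) {
    // nameTable is concatenated as in readTable above; validate it against a
    // whitelist to avoid SQL injection.
    String sql = "SELECT row_to_json(fc)::text FROM ("
            + "SELECT 'FeatureCollection' AS type, "
            + "array_to_json(array_agg(f)) AS features FROM "
            + "(SELECT 'Feature' AS type, "
            + "ST_AsGeoJSON(lg.geom)::json AS geometry, "
            + "row_to_json((SELECT l FROM (SELECT id, mag, time, felt, tsunami) AS l)) AS properties "
            + "FROM " + nameTable + " AS lg) AS f) AS fc";
    return jdbcTemplate.queryForObject(sql, String.class);
}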
I use DataTables with server-side processing. The JSON object I receive contains a LocalDateTime element serialized as an array:
...
"SimpleDate": [ 2000,12,31,0,0 ]
...
My columns definition in the initialization script is the following:
"columns": [
{ "data": "SimpleDate"}
]
By default, the column is rendered comma-separated: 2000,12,31,0,0
How can I change it to 31.12.2000?
I tried columnDefs and render like:
"columnDefs": [
{
"render": function ( data, type, row ) {
return data.2 + '.' + data.1 + '.' + data.0;
},
"targets": 0
}
but this simply stops the table from rendering. I assume accessing the array via data.x is not possible in this state.
So, how do I do it?
You are not accessing the elements of the data array properly.
"render": function ( data, type, row ) {
return data[2] + '.' + data[1] + '.' + data[0];
},
Try something like below.
"columnDefs": ["targets": 0 , "data": "SimpleDate","render": function ( data, type, row ) { return data[2] + '.' + data[1]+ '.' + data[0]; }}
I need to present data from a Derby database in a JTable, but two of the columns are aggregate sums from two one-to-many related tables. Here is an example schema:
SHIFTDATA:
ID
DATE
SHIFT
FOOD_COST
OFFICE_SUPPLIES
REP_MAINT
NET_SALES
SALES_TAX
OTHERPAIDOUTS:
ID
SHIFTDATA_ID
LABEL
AMOUNT
DISCOUNTS:
ID
SHIFTDATA_ID
DISCOUNT_NAME
AMOUNT
There are 0 or more OTHERPAIDOUTS for a given SHIFTDATA
There are 0 or more DISCOUNTS for a given SHIFTDATA
I need the equivalent of this statement, though I know I can’t combine aggregate expressions with "non-aggregate expressions" in a SELECT statement:
SELECT (S.FOOD_COST + S.OFFICE_SUPPLIES + S.REP_MAINT + SUM(O.AMOUNT)) AS TOTAL_PAIDOUTS,
SUM(D.AMOUNT) AS TOTAL_DISCOUNT,
S.NET_SALES,
S.SALES_TAX
FROM SHIFTDATA S, OTHERPAIDOUTS O, DISCOUNTS D WHERE O.SHIFTDATA_ID=S.ID AND D.SHIFTDATA_ID=S.ID
I see in other threads that adding a GROUP BY clause fixes these situations, but I guess adding the second aggregate from a third table is throwing me off. I tried GROUP BY S.NET_SALES, S.SALES_TAX, adding AND S.ID = 278 to the WHERE clause to get a known result, and TOTAL_PAIDOUTS is correct (there are 3 related records in OTHERPAIDOUTS), but the returned TOTAL_DISCOUNT is 3 times what it should be.
Needless to say, I’m not a SQL programmer! Hopefully you get the gist of what I’m after. I tried nested SELECT statements but just made a mess of it. This application is still in development, including the database structure, so if a different DB structure would simplify things, that may be an option. Or, if there's another way to programmatically build the table model, I'm open to that as well. Thanks in advance!!
======== Edit =============
In order to check the values from a known record, I'm querying with a specific SHIFTDATA.ID. Following are the sample table records:
SHIFTDATA:
ID |FOOD_COST |OFFICE_SU&|REP_MAINT |NET_SALES |SALES_TAX
------------------------------------------------------
278 |0.00 |5.00 |10.00 |3898.78 |319.79
OTHERPAIDOUTS:
ID |SHIFTDATA_&|LABEL |AMOUNT
---------------------------------------------------------------------------
37 |278 |FOOD COST FUEL |52.00
38 |278 |MAINT FUEL |5.00
39 |278 |EMPLOYEE SHOES |21.48
DISCOUNTS:
ID |ITEM_NAME |SHIFTDATA_&|AMOUNT
---------------------------------------------------------------------------
219 |Misc Discounts |278 |15.91
What I expect to see for this SHIFTDATA row in the JTable:
TOTAL_PAIDOUTS | TOTAL_DISCOUNT |NET_SALES |SALES_TAX
------------------------------------------------------
93.48 |15.91 |3898.78 |319.79
The best I can get is by adding the GROUP BY clause, but grouping by the fields from SHIFTDATA I get:
TOTAL_PAIDOUTS | TOTAL_DISCOUNT |NET_SALES |SALES_TAX
------------------------------------------------------
93.48 |47.73 |3898.78 |319.79
Here is the SQL query with the required result.
Here are the table definitions, data, sql and the results:
CREATE TABLE shiftdata (
id int,
foodcost int,
officesuppl int,
repmaint int,
netsales int,
salestax int);
CREATE TABLE otherpaidouts (
id int,
shiftid int,
label varchar(20),
amount int);
CREATE TABLE discounts (
id int,
shiftid int,
itemname varchar(20),
amount int);
Create data for two shifts, 278 and 333. Both shifts have discounts, but only shift 278 has otherpaidouts.
insert into shiftdata values (278, 0, 5, 10, 3898, 319);
insert into shiftdata values (333, 22, 15, 100, 2111, 88);
insert into otherpaidouts values (37, 278, 'Food Cost FUEL', 52);
insert into otherpaidouts values (38, 278, 'Maint FUEL', 5);
insert into otherpaidouts values (39, 278, 'Empl SHOES', 21);
insert into discounts values (219, 278, 'Misc DISCOUNTS', 15);
insert into discounts values (312, 333, 'Misc DISCOUNTS', 25);
The Query:
SELECT sd.id, sd.netsales, sd.salestax,
IFNULL(
(SELECT SUM(d.amount) FROM discounts d WHERE d.shiftid=sd.id), 0) AS total_discount,
(SELECT sd.foodcost + sd.officesuppl + sd.repmaint + IFNULL(SUM(op.amount), 0) FROM otherpaidouts op WHERE op.shiftid=sd.id) AS total_paidouts
FROM shiftdata sd;
The Result:
+------+----------+----------+----------------+----------------+
| id | netsales | salestax | total_discount | total_paidouts |
+------+----------+----------+----------------+----------------+
| 278 | 3898 | 319 | 15 | 93 |
| 333 | 2111 | 88 | 25 | 137 |
+------+----------+----------+----------------+----------------+
Try a LEFT OUTER JOIN, something like this:
SELECT S.FOOD_COST + S.OFFICE_SUPPLIES + S.REP_MAINT + SUM(O.AMOUNT) AS TOTAL_PAIDOUTS,
SUM(D.AMOUNT) AS TOTAL_DISCOUNT,
S.NET_SALES,
S.SALES_TAX
FROM SHIFTDATA S
LEFT JOIN OTHERPAIDOUTS AS O ON O.SHIFTDATA_ID = S.ID
LEFT JOIN DISCOUNTS AS D ON D.SHIFTDATA_ID = S.ID
Edit: the joins above repeat each SHIFTDATA row once for every combination of matching OTHERPAIDOUTS and DISCOUNTS rows, which inflates the sums, so use correlated scalar subqueries instead:
SELECT S.FOOD_COST + S.OFFICE_SUPPLIES + S.REP_MAINT +
( SELECT COALESCE(SUM(AMOUNT), 0) FROM OTHERPAIDOUTS WHERE SHIFTDATA_ID = S.ID ) AS TOTAL_PAIDOUTS,
( SELECT COALESCE(SUM(AMOUNT), 0) FROM DISCOUNTS WHERE SHIFTDATA_ID = S.ID ) AS TOTAL_DISCOUNT,
S.NET_SALES,
S.SALES_TAX
FROM SHIFTDATA S